WO2023011063A1 - Sound prompting method, apparatus, device and storage medium in a virtual world - Google Patents

Sound prompting method, apparatus, device and storage medium in a virtual world

Info

Publication number
WO2023011063A1
WO2023011063A1 (PCT/CN2022/102593)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
indicator
azimuth
virtual character
visual representation
Prior art date
Application number
PCT/CN2022/102593
Other languages
English (en)
French (fr)
Inventor
Zhou Jing (周婧)
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2023011063A1
Priority to US18/322,031 (published as US20230285859A1)


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5375 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall

Definitions

  • the embodiments of the present application relate to the field of human-computer interaction, and in particular to a sound prompting method, apparatus, device, and storage medium in a virtual world.
  • the user can operate the game characters in the game program for competitive confrontation.
  • the game program provides a virtual world, and game characters are virtual characters located in the virtual world.
  • the game screen and mini-map controls are displayed on the terminal.
  • the game screen is a screen obtained by observing the virtual world from the perspective of the current game character.
  • the mini-map control is a control for displaying an overhead map of the virtual world.
  • the sound icon is displayed on the mini-map control, so as to prompt, in the mini-map space, gunshots, footsteps, silenced gunshots, and the like according to the sound icon.
  • point A on the mini-map control displays a sound icon and the sound icon is a pair of footprints, which indicates that there are other game characters walking in the virtual world corresponding to point A on the mini-map control.
  • the above-mentioned sound icon cannot provide the specific position of the sound source in the virtual world, and the effective information it can provide is limited, which is not conducive to game characters carrying out a deeper confrontation in the virtual world.
  • the present application provides a sound prompting method, apparatus, device, and storage medium in a virtual world, which can simultaneously indicate the horizontal and vertical orientations of a sound source in the virtual world through a sound indicator. The technical scheme is as follows:
  • a method for sound prompting in a virtual world comprising:
  • compass information is displayed on the perspective picture; the compass information includes a sequence of azimuth scales, and the azimuth scales in the sequence of azimuth scales are used to indicate the horizontal orientation that the first virtual character faces in the virtual world;
  • the first sound indicator is used to indicate the horizontal orientation and vertical orientation corresponding to the first sound source.
  • a sound prompting device in a virtual world comprising:
  • a display module, configured to display the perspective picture of the first virtual character, where compass information is displayed on the perspective picture; the compass information includes an azimuth scale sequence, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character faces in the virtual world;
  • a control module, configured to control the activities of the first virtual character in the virtual world;
  • the display module is further configured to display a first sound indicator based on the first azimuth scale in the azimuth scale sequence when the first virtual character is active in the virtual world and a first sound source exists in the surrounding environment of the first virtual character; the first sound indicator is used to indicate the horizontal azimuth and vertical azimuth corresponding to the first sound source.
  • a computer device is provided, including: a processor and a memory, where at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the sound prompting method in the virtual world as described in the foregoing aspect.
  • a computer-readable storage medium is provided, where at least one program is stored in the computer-readable storage medium, and the at least one program is loaded and executed by a processor to implement the sound prompting method in the virtual world as described in the foregoing aspect.
  • a computer program product is provided; when the computer program product is executed by a processor, the processor is enabled to implement the method for sound prompting in a virtual world as described in the foregoing aspect.
  • compass information is displayed on the perspective picture where the first virtual character is located, and the compass information includes the azimuth scale sequence, so the first sound indicator can be displayed based on the first azimuth scale in the sequence.
  • the first sound indicator can simultaneously indicate the horizontal and vertical azimuths corresponding to the first sound source, so that the user can accurately judge the specific spatial position of the first sound source by visual representation alone; in a hearing-limited scene without external sound or earphones, enough effective spatial information about the sound source can still be obtained, which is beneficial for the first virtual character to carry out a deeper confrontation in the virtual world.
  • Fig. 1 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application
  • Fig. 2 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application
  • Fig. 3 is a schematic interface diagram of a virtual environment screen provided by an exemplary embodiment of the present application.
  • Fig. 4 is a schematic diagram of three sound prompts in the vertical direction provided by an exemplary embodiment of the present application.
  • Fig. 5 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application
  • Fig. 6 is a schematic diagram of an interface for prompting a sound from above provided by an exemplary embodiment of the present application.
  • Fig. 7 is a schematic diagram of an interface prompting two types of sounds provided by an exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of an interface for prompting sound with an icon style provided by an exemplary embodiment of the present application.
  • Fig. 9 is a schematic diagram of prompting sounds at different distances provided by an exemplary embodiment of the present application.
  • Fig. 10 is a schematic interface diagram of a second sound indicator provided by an exemplary embodiment of the present application.
  • Fig. 11 is a schematic diagram of vertical sound confirmation provided by an exemplary embodiment of the present application.
  • Fig. 12 is a configuration diagram of the influence of the helmet earphone on the sound prompt provided by an exemplary embodiment of the present application
  • Fig. 13 is a configuration diagram of the influence of the muffler on the sound prompt provided by an exemplary embodiment of the present application
  • Fig. 14 is a configuration diagram of the impact of trampling materials on sound prompts provided by an exemplary embodiment of the present application.
  • Fig. 15 is a configuration diagram of influence coefficients of different sound types provided by an exemplary embodiment of the present application.
  • Fig. 16 is a configuration diagram of different firearm type parameters provided by an exemplary embodiment of the present application.
  • FIG. 17 is a configuration diagram of general configuration parameters of prompt sounds provided by an exemplary embodiment of the present application.
  • Fig. 18 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
  • Fig. 19 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Fig. 20 is a schematic structural diagram of a server provided by an exemplary embodiment of the present application.
  • First-Person Shooter game (First Person Shooting, FPS for short)
  • Third-Person Shooter game (TPS)
  • a first-person shooter is a shooting game played from the player's own point of view; rather than manipulating a virtual character displayed on the screen as in other game types, the player experiences the visual impact of the game from an immersive first-person perspective.
  • a third-person shooter differs from a first-person shooter, in which only the protagonist's field of vision is displayed on the screen;
  • in a third-person shooter, the game character controlled by the player is visible on the game screen, which places more emphasis on the sense of action.
  • Virtual world is the virtual world displayed (or provided) when the application program is running on the terminal.
  • the virtual world may be a three-dimensional virtual world or a two-dimensional virtual world.
  • the three-dimensional virtual world can be a simulated environment of the real world, a semi-simulated and semi-fictitious environment, or a purely fictitious environment.
  • the following embodiments are illustrated by taking the virtual world as a three-dimensional virtual world as an example, but this is not limited thereto.
  • the virtual world is also used for virtual scene battles between at least two virtual characters.
  • the virtual scene is also used for fighting with virtual firearms between at least two virtual characters.
  • Virtual character refers to the movable object in the virtual world.
  • the movable object may be a simulated character or an anime character in the virtual world.
  • the virtual object is a three-dimensional model created based on animation skeleton technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual scene, and occupies a part of the space in the three-dimensional virtual scene.
  • the virtual characters may be individuals in the virtual world that can independently make different sounds, including the first virtual character, the second virtual character, etc., which respectively represent independent individuals who make different sounds. Individuals who are making sounds can act as sound sources.
  • Sound indicator: a visual control used to indicate sound information in the virtual world.
  • the visual control has one or more visual representations, and each visual representation is used to represent a kind of sound information.
  • the types of sound information include at least one of: horizontal orientation, vertical orientation, sound type, sound volume, sound distance, and the action frequency of the sound source.
  • Visual representation of the sound indicator refers to the display effect displayed on the sound indicator that can be visually captured by the user.
  • Each visual representation includes one or a combination of: shape, pattern, color, texture, text, animation effect, start display time, continuous display time, and blanking time. Different kinds of sound information are presented by different visual representations. Optionally, visual representations of different dimensions are superimposed simultaneously on the same sound indicator to present different information.
  • Fig. 1 shows a schematic structural diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a first terminal 120 , a server cluster 140 and a second terminal 160 .
  • the first terminal 120 is installed and runs a game program supporting a virtual environment.
  • the game program may be a first-person shooter game or a third-person shooter game.
  • the first terminal 120 may be a terminal used by the first user, and the first user uses the first terminal 120 to operate the first virtual character in the virtual world to carry out activities; such activities include but are not limited to at least one of: sprinting, climbing, squatting, walking quietly, crawling quietly, squatting quietly, single-shot firing, continuous firing, Non-Player Character (NPC) shouting, rubbing against grass, injured shouting, near-death shouting, explosion, and walking.
  • schematically, the first virtual character is a first virtual person.
  • the first terminal 120 is connected to the server cluster 140 through a wireless network or a wired network.
  • the server cluster 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server cluster 140 is used to provide background services for applications supporting virtual environments.
  • the server cluster 140 undertakes the main computing work, and the first terminal 120 and the second terminal 160 undertake the secondary computing work; or, the server cluster 140 undertakes the secondary computing work, and the first terminal 120 and the second terminal 160 undertake the main computing work; or, the server cluster 140, the first terminal 120, and the second terminal 160 perform collaborative computing using a distributed computing architecture.
  • the second terminal 160 is installed and runs a game program supporting a virtual environment.
  • the game program may be a first-person shooter game or a third-person shooter game.
  • the second terminal 160 may be a terminal used by the second user, and the second user uses the second terminal 160 to operate the second virtual character located in the virtual world to perform activities, such activities include but not limited to: sprinting, climbing, squatting, walking quietly At least one of walking, crawling quietly, squatting quietly, single-shot firing, continuous firing, NPC shouting, rubbing grass, injured shouting, near-death shouting, explosion, and walking.
  • schematically, the second virtual character is a second virtual person.
  • the first virtual character and the second virtual character may belong to the same team, the same organization, have friendship or have temporary communication authority.
  • the application programs installed on the first terminal 120 and the second terminal 160 are the same, or the same type of application programs on different platforms.
  • the first terminal 120 may generally refer to one of the multiple terminals
  • the second terminal 160 may generally refer to one of the multiple terminals. This embodiment only uses the first terminal 120 and the second terminal 160 as an example for illustration.
  • the first terminal 120 and the second terminal 160 may be desktop devices or mobile devices.
  • the types of mobile devices used as the first terminal 120 and the second terminal 160 are the same or different; the mobile devices include, but are not limited to: smart phones, tablet computers, and other portable electronic devices.
  • Fig. 2 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
  • the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
  • the method includes the following steps:
  • Step 202: Display the perspective picture of the first virtual character, where compass information is displayed on the perspective picture and the compass information includes a sequence of azimuth scales.
  • the compass information (or compass control) is used to use the foothold of the first virtual character in the virtual world as a reference point to indicate the various horizontal orientations that the first virtual character faces in the virtual world.
  • the horizontal orientation is represented by the longitude in the virtual world, such as 20 degrees east longitude, 160 degrees west longitude and so on.
  • a first virtual character 10 is displayed on the perspective screen.
  • the first virtual character 10 can be any movable object in the virtual world, for example, it can be a soldier in the virtual world.
  • the movement wheel 12 is used to control the movement of the first virtual character 10 in the virtual world
  • the skill button 14 is used to control the first virtual character 10 to release skills or use items in the virtual world.
  • the compass information 16 displays a sequence of azimuth scales.
  • the azimuth scale sequence may be a sequence of multiple azimuth scales.
  • the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character faces in the virtual world.
  • the azimuth scale sequence includes 7 azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, and 255 degrees.
  • the orientation scale of 215 degrees is used to indicate the horizontal orientation directly in front of the first virtual character 10 .
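The mapping from world positions to a compass bearing like the 215-degree scale above can be illustrated with a small helper. This is a minimal sketch, not the patent's implementation; the function name and the z-up, north-along-y coordinate convention are assumptions:

```python
import math

def horizontal_bearing(listener_xy, source_xy):
    """Bearing from the listener to the source on the ground plane,
    in degrees in [0, 360), with 0 = north and 90 = east."""
    dx = source_xy[0] - listener_xy[0]  # east-west offset
    dy = source_xy[1] - listener_xy[1]  # north-south offset
    return math.degrees(math.atan2(dx, dy)) % 360
```

A source due east of the listener, for example, yields a bearing of 90 degrees, which the compass could then render against the nearest azimuth scale.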
  • Step 204: Control the activities of the first virtual character in the virtual world.
  • the user can control the first virtual character to move in the virtual world.
  • the activities here may include various forms of activities, such as moving, releasing skills, using items, and so on. Different activities can have different control methods.
  • the user can control the first virtual character 10 to move by means of the movement wheel 12, and the user can also control the first virtual character to release skills or use items by pressing one or more preset skill buttons 14.
  • the user can also control the first avatar through signals generated by long pressing, clicking, double-clicking and/or sliding on the touch screen.
  • Step 206: When the first virtual character is active in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, display the first sound indicator based on the first azimuth scale in the azimuth scale sequence, where the first sound indicator is used to indicate the horizontal orientation and vertical orientation of the first sound source.
  • the surrounding environment of the first virtual character is a virtual environment within a three-dimensional spherical range with the first virtual character as the center and a preset distance as the radius.
  • the surrounding environment of the first virtual character is a virtual environment centered on the first virtual character, with a preset distance as a radius and located within a three-dimensional hemispherical range on the ground plane.
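As a rough illustration of the spherical and hemispherical variants just described, the following hypothetical helper checks whether a sound source falls inside the surrounding environment (the names and the z-as-height convention are assumptions, not the patent's code):

```python
import math

def in_surrounding_environment(listener, source, radius, hemisphere=False):
    """True if the source lies within the three-dimensional spherical range
    (or the above-ground hemispherical range) of the given radius centred
    on the listener; positions are (x, y, z) tuples with z = height."""
    if hemisphere and source[2] < 0:  # below the ground plane
        return False
    return math.dist(listener, source) <= radius
```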
  • the first sound source may be a virtual element capable of emitting sound in the virtual world, such as a second virtual character (friendly, enemy, or NPC), a virtual vehicle, a virtual flying object, various offensive weapons, virtual animals, and the like.
  • the second virtual character may be other virtual characters in the virtual world except the first virtual character, and the number of the second virtual character is at least one. Since the virtual world is a digitally simulated environment, the sound in this application may refer to a sound event in the digital world, and the sound event is represented by a set of parameters.
  • a set of parameters of a sound event includes, but is not limited to, at least one of: the three-dimensional coordinates of the sound source in the virtual world, the type of the sound source, the type of material the sound source touches, the type of sound, the original sound volume, and the equipment worn by the sound source.
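Such a parameter set might be modelled as a plain record; the field names below are illustrative assumptions rather than the patent's actual data structure:

```python
from dataclasses import dataclass

@dataclass
class SoundEvent:
    """One sound event in the digital world, carried as a set of parameters."""
    position: tuple          # (x, y, z) coordinates of the source in the virtual world
    source_type: str         # e.g. "virtual_character", "vehicle"
    contact_material: str    # material the source touches, e.g. "grass", "metal"
    sound_type: str          # e.g. "gunshot", "footstep"
    base_volume: float       # original loudness of the sound
    equipment: tuple = ()    # gear worn by the source, e.g. ("silencer",)
```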
  • When a first sound source (a virtual element, also referred to as an individual in the virtual world) exists in the surrounding environment of the first virtual character, the first sound indicator can be displayed based on the first azimuth scale in the sequence of azimuth scales.
  • the first sound indicator is used to indicate the horizontal orientation and vertical orientation of the first sound source.
  • the horizontal orientation is the orientation divided along the horizontal direction with the first virtual object as the center, such as the longitude in the virtual world.
  • the vertical orientation is the orientation divided along the vertical direction with the first virtual object as the center, such as the pitch angle of the sound source (for example, the first sound source) relative to the first virtual character.
  • the vertical orientation is represented by a vertical orientation scale, which is similar to latitude; or, the vertical orientation is represented by an altitude; or, since the virtual character has limited room for activity in the vertical direction, the vertical orientation can be simplified or abstracted into: an upper orientation, a middle orientation, and a lower orientation.
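The simplification of the vertical orientation into upper, middle, and lower orientations could be sketched as follows; the pitch-angle threshold and category names are assumptions for illustration:

```python
import math

def vertical_orientation(listener, source, threshold_deg=15.0):
    """Abstract the vertical orientation of the source relative to the
    listener into 'upper', 'middle' or 'lower' from the pitch angle."""
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dz = source[2] - listener[2]
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    if pitch > threshold_deg:
        return "upper"
    if pitch < -threshold_deg:
        return "lower"
    return "middle"
```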
  • the horizontal orientation of the first sound source may be the first horizontal orientation
  • the vertical orientation of the first sound source may be the first vertical orientation
  • the first horizontal orientation of the first sound source may be indicated by the first azimuth scale, and the first vertical orientation of the first sound source may be represented by a first visual representation of the first sound indicator, which may be at least one of the shape, pattern, color, texture, text, and animation effect of the first sound indicator.
  • the terminal may display a first sound indicator with a first visual representation based on the first azimuth scale in the azimuth scale sequence, the center position of the first sound indicator is aligned with the first azimuth scale, and the first azimuth scale is used for The horizontal orientation of the first sound source is indicated, and the first visual representation is used to indicate the vertical orientation of the first sound source.
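Aligning the indicator's centre with the first azimuth scale amounts to converting the angular difference between the character's heading and the source's bearing into an on-screen offset. A minimal sketch, assuming a fixed pixels-per-degree scale for the compass strip:

```python
def indicator_offset_px(heading_deg, source_bearing_deg, px_per_degree=4.0):
    """Horizontal pixel offset of the sound indicator's centre from the
    centre of the compass strip (positive = to the right of the heading)."""
    # signed angular difference wrapped into [-180, 180)
    delta = (source_bearing_deg - heading_deg + 180) % 360 - 180
    return delta * px_per_degree
```

The wrap-around step matters near north: a source at bearing 10 degrees seen from heading 350 degrees sits 20 degrees to the right, not 340 degrees to the left.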
  • the method provided by this embodiment displays the first sound indicator based on the first azimuth scale in the azimuth scale sequence only when the first sound source exists in the surrounding environment of the first virtual character, without relying on a mini-map to prompt the position;
  • the first sound indicator can also indicate the first horizontal orientation and the first vertical orientation corresponding to the first sound source at the same time, so that the user can accurately judge the spatial position of the first sound source only by visual representation.
  • even in a hearing-limited scene, enough effective spatial information about the sound source can still be obtained. Exemplarily, the first sound indicator 19 is displayed based on the first azimuth scale "southwest" in the compass information 16.
  • the first azimuth scale "southwest” is used to indicate that the horizontal azimuth of the first sound source is southwest.
  • the shape of the first sound indicator 19 is used to indicate that the vertical orientation of the first sound source is the middle orientation.
  • Taking the first visual representation as a shape as an example, in conjunction with the implementation in (a) of FIG. 4: when the shape of the first sound indicator 19 is an upper triangle, it means that the first sound source is located in the upper orientation of the first virtual character; when the shape of the first sound indicator 19 is shuttle-shaped, it means that the first sound source is located in the middle orientation of the first virtual character; when the shape of the first sound indicator 19 is a lower triangle, it means that the first sound source is located in the lower orientation of the first virtual character.
  • In conjunction with the implementation in (b) of FIG. 4: when the arrow on the left side of the first sound indicator 19 points upward, it means that the first sound source is located at the upper position of the first virtual character; when the arrow on the left side of the first sound indicator 19 is a circle, it means that the first sound source is located at the middle position of the first virtual character; when the arrow on the left side of the first sound indicator 19 points downward, it means that the first sound source is located at the lower position of the first virtual character.
  • the first sound indicator 19 includes three grids arranged vertically. If the uppermost grid of the three grids is filled with color, it means that the first sound source is located at the upper position of the first avatar; if the middle grid of the three grids is filled with color, it means that the first sound source is located at The middle position of an avatar; if the bottommost grid of the three grids is filled with color, it means that the first sound source is located at the lower position of the first avatar.
  • Taking the first visual representation as the shape of the first sound indicator 19 with an additional number as an example, in conjunction with the implementation of (d) in FIG. 4: when the shape of the first sound indicator 19 is an upper triangle and carries the number "100m", it means that the first sound source is located in the upper orientation of the first virtual character at a height of 100 meters above the ground; when the shape of the first sound indicator 19 is shuttle-shaped, it means that the first sound source is located in the middle orientation of the first virtual character; when the shape of the first sound indicator 19 is a lower triangle and carries the number "-15m", it means that the first sound source is located in the lower orientation of the first virtual character at a depth of 15 meters below the ground.
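The variant with shape plus an attached height number might be produced by a helper like the following; the shape names and the 1-metre dead zone around ground level are assumptions:

```python
def shape_and_label(height_diff_m, threshold_m=1.0):
    """Pick the indicator's shape (encoding the vertical orientation) and
    an optional number encoding the height difference in metres."""
    if height_diff_m > threshold_m:
        return "upper_triangle", f"{round(height_diff_m)}m"
    if height_diff_m < -threshold_m:
        return "lower_triangle", f"{round(height_diff_m)}m"
    return "shuttle", ""
```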
  • the method provided in this embodiment can prompt various sound information of the first sound source based on the first sound indicator near the compass information even when there is no mini-map control on the user interface.
  • multiple aspects of the sound can be prompted based on the various visual representations on the first sound indicator, while occupying a very small screen area; in the absence of the first sound source, the number of Head Up Display (HUD) controls on the entire user interface is kept as small as possible, so that the user interface is more concise and effective and can bring users a more immersive program experience.
  • Fig. 5 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
  • the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
  • the method includes the following steps:
  • Step 202: Display the perspective picture of the first virtual character, where compass information is displayed on the perspective picture and the compass information includes a sequence of azimuth scales.
  • the first virtual character is a virtual object that the first user is controlling.
  • the perspective picture of the first virtual character is a picture obtained by observing the virtual world from the perspective of the first virtual character during the running of the application program in the terminal.
  • the perspective picture of the first virtual character is a picture obtained by observing in the virtual world through the first-person perspective of the first virtual character.
• the first-person perspective of the first virtual character automatically follows the movement of the virtual character in the virtual world; that is, when the position of the first virtual character in the virtual world changes, the first-person perspective of the first virtual character changes simultaneously, and the first-person perspective of the first virtual character is always within a preset distance range of the first virtual character in the virtual world.
  • the compass information includes a sequence of azimuth scales, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character is facing in the virtual world.
• the azimuth scales of the horizontal orientations that can be observed from the perspective of the first virtual character in the virtual world are displayed in the azimuth scale sequence, and the azimuth scales of horizontal orientations that cannot be observed under the current viewing angle may not be displayed in the azimuth scale sequence.
  • a direction scale within a preset range centered on the horizontal position right in front of the first virtual character is displayed in the sequence of direction scales.
  • a first virtual character 10 a moving wheel 12 , a skill button 14 and compass information 16 are displayed on the perspective screen.
  • the first virtual character 10 may be a soldier located in the virtual world.
  • the movement wheel 12 is used to control the movement of the first virtual character 10 in the virtual world
  • the skill button 14 is used to control the first virtual character 10 to release skills or use items in the virtual world.
  • Compass information 16 is displayed with a sequence of azimuth scales.
  • the azimuth scale sequence includes 7 azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, 255 degrees.
  • the orientation scale of 215 degrees is used to indicate the horizontal orientation directly in front of the first virtual character 10 .
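As a sketch of how such an azimuth scale sequence might be generated, the snippet below builds ticks within a preset range centered on the heading. The ±45° half-range, the 15° step, and the cardinal labels are illustrative assumptions, not values taken from this application.

```python
# Illustrative sketch (not from this application): build the azimuth scale
# sequence within a preset range centered on the current heading.
CARDINALS = {0: "N", 45: "NE", 90: "E", 135: "SE",
             180: "S", 225: "SW", 270: "W", 315: "NW"}

def azimuth_scale_sequence(heading_deg, half_range=45, step=15):
    """Return (degree, label) ticks observable around the heading."""
    lo = heading_deg - half_range
    hi = heading_deg + half_range
    first = -(-lo // step) * step  # smallest multiple of step >= lo
    ticks = []
    for deg in range(first, hi + 1, step):
        d = deg % 360
        ticks.append((d, CARDINALS.get(d, str(d))))
    return ticks
```

With a heading of 215 degrees, this sketch yields ticks such as "S" at 180 and "SW" at 225, similar in spirit to the sequence described above.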
  • Step 204 controlling the activities of the first virtual character in the virtual world
• the user can control the first virtual character 10 to move through the movement wheel 12, and the user can also control the first virtual character to release skills or use items by pressing one or more preset skill buttons 14.
  • the user can also control the first avatar through signals generated by long pressing, clicking, double-clicking and/or sliding on the touch screen.
  • Step 206 When the first virtual character is active in the virtual world, if there is a first sound source in the surrounding environment of the first virtual character, the first sound source is used to emit a first sound, determining a visual representation of the first sound indicator based on sound parameters of the first sound;
  • the visual representation of the first audio indicator includes at least one of the following visual representations:
  • a first visual representation for indicating the vertical orientation of the first sound
  • a fifth visual representation for indicating the frequency of motion of the first sound.
  • each visual representation is a different type of visual representation.
• Each visual representation is one of the shape, pattern, color, texture, text, animation effect, start display time, continuous display time, and blanking time of the first sound indicator. Different visual representations can be superimposed on the same sound indicator, and different visual representations are used to convey different sound information.
  • the display start time is the moment when the first sound indicator appears on the user interface.
  • the continuous display time is the total duration of displaying the first sound indicator on the user interface.
• the blanking time is the time period from when the first sound indicator begins to fade (its opacity decreases) until it disappears from the user interface.
• when the first sound source exists in the surrounding environment of the first virtual character, the first sound source triggers a sound event. If the first virtual character is the virtual character used by the user of the current terminal, and the first sound source is a sound source corresponding to another terminal, the other terminal synchronizes the sound event to the current terminal through the server; if the first sound source is a sound source corresponding to the current terminal, the current terminal generates the sound event itself.
  • the sound event has a sound parameter
  • the terminal may determine the visual representation of the first sound indicator according to the sound parameter of the first sound.
  • Sound parameters include, but are not limited to: the type of the first sound source, the material of the first sound source, the equipment of the first sound source, the position of the first sound source, the sound type of the first sound, the volume of the first sound, At least one of the operating frequencies of the first sound source.
• when a single active object (i.e., a single sound source) emits at least two sounds at the same time, the sound with the loudest volume among the at least two sounds is determined as the first sound.
  • the firing event is determined as the first sound event, the sound indicator of the firing event is displayed, and the sound indicator of the walking event is blocked.
  • the first sound source emits a new sound event and the volume of the new sound event is greater than the volume of the current sound event
  • only the sound indicator of the new sound event is displayed.
• the first avatar walks a few steps in place and then fires immediately. Since the volume of the firing event is greater than the volume of the walking event, the firing event is determined as the first sound event, the sound indicator of the walking event disappears, and only the sound indicator of the firing event is displayed.
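The loudest-sound-wins rule described above can be sketched as follows; the event dictionaries and field names are assumptions for illustration, not structures from this application.

```python
# Illustrative sketch of the "loudest sound wins" rule: when one sound source
# emits several concurrent sound events, only the loudest keeps its indicator.
def select_displayed_event(events):
    """Return the event whose sound indicator is displayed, or None."""
    if not events:
        return None
    return max(events, key=lambda e: e["volume"])

# Walking and firing occur together; firing is louder, so only the firing
# indicator is displayed and the walking indicator is suppressed.
shown = select_displayed_event([
    {"type": "walking", "volume": 30},
    {"type": "firing", "volume": 80},
])
```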
• the method provided in this embodiment further improves the accuracy of sound prompts by eliminating unnecessary calculations for low-volume sound events.
• Step 208 Displaying a first sound indicator with the visual representation based on the first azimuth scale in the sequence of azimuth scales;
• the terminal may display a first sound indicator having at least one visual representation based on the first azimuth scale in the compass information.
  • the first sound indicator may be displayed at a proper position of the user interface. For example, if the compass information is displayed above the user interface, a first sound indicator is displayed below the first azimuth scale in the compass information; if the compass information is displayed below the user interface, in the compass information The first audible indicator is displayed above the first bearing scale.
• the center position of the first sound indicator is aligned with the first azimuth scale; that is, the central axis of the first sound indicator is aligned with the first azimuth scale.
  • the visual representation of the first sound indicator may include a first visual representation, and the first visual representation is used to indicate the vertical orientation of the first sound.
• the visual representation of the first sound indicator may include other visual representations in addition to the first visual representation; that is, the terminal may display, based on the first azimuth scale, a first sound indicator that has both the first visual representation and other visual representations.
  • Other visual manifestations include at least one of the following visual manifestations:
  • a fifth visual representation for indicating the frequency of motion of the first sound.
  • the first visual representation includes n first visual representations corresponding to n vertical orientations one by one, where n is a positive integer greater than 1.
  • the client terminal displays the first sound indicator of the i-th first visual representation based on the first azimuth scale in the compass information, where i is a positive integer not greater than n.
  • the i-th first visual representation is used to indicate that the first sound source corresponds to the i-th vertical orientation.
• the i-th first visual representation may include marking the first sound indicator with an altitude value of the first sound source corresponding to the i-th vertical orientation.
  • the n kinds of vertical orientations may include an upper orientation, a middle orientation and a lower orientation, that is, to distinguish the upper, middle, and lower spaces through the first sound indicator.
  • the function of the first visual representation is to enable the user to know the vertical orientation of the first sound source by observing the visual representation of the first sound indicator, and there can be one or more ways to visually reflect the vertical orientation of the first sound source .
• the first visual representation includes at least one of the following: the shape of the first sound indicator, the vertical orientation scale in the first sound indicator, the arrow in the first sound indicator, and the text prompt in the first sound indicator.
  • this step includes one of the following three steps:
• a first sound indicator with an upward-pointing shape is displayed below the first orientation scale "215" in the compass information 16, to indicate that the vertical orientation of the first sound is the upper orientation.
  • the first sound indicator is displayed in a vertically symmetrical shape, and the first sound indicator is used to indicate that the vertical azimuth of the first sound source is the middle azimuth.
  • the vertical orientation of the first sound source is the lower orientation.
  • the first visual representation includes a vertical bearing scale
• displaying a first sound indicator with a vertical bearing scale based on the first bearing scale in the sequence of bearing scales, the vertical bearing scale being used to indicate the vertical bearing of the first sound source;
  • the vertical azimuth scale is used to indicate the elevation angle of the first sound source relative to the first avatar. That is, the vertical orientation scale is represented by the pitch angle of the first sound source relative to the first avatar.
  • a first sound indicator with an arrow is displayed based on the first azimuth scale in the azimuth scale sequence, and the arrow direction of the arrow is used to indicate the vertical azimuth of the first sound source.
  • an upward arrow represents an upper position whose height is higher than the plane where the first virtual character is located
  • a downward arrow represents a lower position whose height is lower than the plane where the first virtual character is located.
  • a first sound indicator with a text prompt is displayed based on the first orientation scale in the orientation scale sequence, and the text prompt is used to indicate the vertical orientation of the first sound source.
  • a combination of at least two of the above shapes, vertical orientation scales, arrows, and text prompts can also be implemented as the first visual representation, which is not limited.
  • the second visual representation is used to indicate the sound type of the first sound.
  • the manner of determining the visual expression of the first sound indicator according to the sound parameter of the first sound may be to determine the second visual expression of the first sound indicator according to the sound type of the first sound.
  • the second visual representation includes the color of the first sound indicator, that is, different colors of the first sound indicator are used to indicate the sound type of the first sound. For example, use white to represent avatar footsteps/NPC shouts, and red to represent gunshots/explosions.
  • the sound indicator 19a is white, representing the sound of footsteps; the sound indicator 19b is red, representing the sound of gunfire.
  • the second visual representation includes an icon style of the first sound indicator, that is, the first sound indicator adopts a different icon style to indicate the sound type of the first sound.
  • the gun icon style 191 is used to represent the sound type "gunshot”; the footprint icon style 192 is used to represent the sound type "footsteps”; the human head icon style 193 is used to represent the sound type "human voice”;
  • the sound type "explosion” is represented by the explosion icon style 194.
  • the second visual representation is the duration of the display of the first audio indicator.
  • the continuous display duration includes: a first duration when the first sound indicator is displayed in an opaque manner, and a second duration (that is, a blanking duration) when the display is canceled after changing from an opaque manner to a transparent manner. That is, different continuous display durations are used to indicate the sound type of the first sound. For example, different sound types correspond to different first durations, or different sound types correspond to different second durations, or different sound types correspond to different first durations and second durations.
  • the third visual representation is used to indicate the loudness of the first sound.
• the sound level of the first sound refers to the sound level of the first sound arriving at the first virtual character, which is used to simulate the sound level actually heard by the first virtual character, rather than the original sound level of the first sound. The first sound indicator is represented by a sound wave amplitude spectrum.
  • the third visual representation includes the magnitude of the first sound wave amplitude spectrum
• the sound parameters include the sound size, and the manner of determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be to determine the amplitude of the first sound wave amplitude spectrum according to the magnitude of the first sound arriving at the first virtual character.
  • the sound indicator is represented by a sound wave amplitude spectrum, and the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude.
  • two sound indicators 19a and 19b are displayed on the user interface.
  • the amplitude of the sound wave of the sound indicator 19a is smaller than the amplitude of the sound wave of the sound indicator 19b
• the magnitude of the sound corresponding to the sound indicator 19a is smaller than the magnitude of the sound corresponding to the sound indicator 19b; that is, at the same distance, the sound of footsteps is smaller than the sound of gunshots.
  • the acoustic indicator 19a on the left has a smaller amplitude of the sound wave
  • the acoustic indicator 19b on the right has a greater amplitude of the sound wave.
  • the sound indicator is represented by a sound wave amplitude spectrum, and the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude. For two sounds with different sound magnitudes, different sound wave amplitudes are used to represent them.
  • Fig. 9 shows that in the distance range of 100-200m, when the gunshot becomes weaker with the distance, the amplitude of the first sound indicator will also become weaker.
  • the fourth visual representation is used to indicate the sound distance of the first sound.
  • the fourth visual representation is represented by the start display time of the first sound indicator. That is, when the sound event of the first sound is received, the first sound indicator will not be displayed immediately, but will be displayed after a certain time delay. The length of the delay is related to the sound distance, which is the distance between the first sound source and the first avatar.
  • the fifth visual representation is used to indicate the frequency of action of the first sound.
  • the first sound indicator is represented by a sound wave amplitude spectrum
  • the fifth visual representation includes a vibration frequency of the sound wave amplitude spectrum
  • the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude
• the sound wave amplitude spectrum can be dynamically scaled and changed to indicate the jitter of the sound wave. Since the ranges of motion that generate the first sound are different, the action frequencies of the first sound are also different.
• the sound parameters include the operating frequency of the first sound source, and the manner of determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be to determine the jitter frequency of the sound wave amplitude spectrum according to the action frequency at which the first sound source emits the first sound.
• when the first virtual character is running, the sound wave amplitude spectrum is displayed in white, and the jitter frequency of the sound wave amplitude spectrum is higher, to show the rush of running.
• when the first virtual character crouches and moves slowly, the jitter frequency of the sound wave amplitude spectrum is smaller, to show the feeling of moving slowly and distinguish it from running.
  • Step 210 If there is a second sound source in the surrounding environment of the first virtual character and the horizontal orientation of the second sound source is outside the range of visible orientation, based on the edge orientation scale closest to the horizontal orientation of the second sound source in the orientation scale sequence displaying a second sound indicator, the second sound indicator is used to indicate that there is a second sound source along the horizontal direction indicated by the edge direction scale;
• the horizontal orientation of the second sound source is also referred to as the second horizontal orientation.
• a second sound indicator is displayed based on the second orientation scale in the orientation scale sequence; for example, the horizontal and vertical orientation of the second sound is prompted by a second sound wave amplitude spectrum.
• the second sound indicator is displayed based on the edge orientation scale closest to the horizontal orientation of the second sound source in the orientation scale sequence, to indicate the presence of a second sound source or a second sound.
  • a second sound indicator 19 is displayed based on the edge azimuth scale on the compass information 18 .
  • the second sound indicator 19 may be aligned with the edge azimuth scale, or may exceed the edge azimuth scale.
  • the second sound indicator 19 is used to indicate that there is a second sound source in the invisible area on the right side of the first virtual character 10 , and the second sound source emits a second sound.
  • the second sound indicator 19 has at least one of the above five visual representations.
• the second sound indicator 19 has the same or a smaller variety of visual representations than the first sound indicator.
  • the second sound indicator 19 only uses color or icon style to display the sound type of the second sound.
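The edge-scale behavior described in Step 210 can be sketched as a simple clamp: a source whose horizontal azimuth falls outside the visible range is pinned to the nearest edge azimuth scale. Wrap-around at 0/360 degrees is ignored for brevity, and the function name and ranges are illustrative assumptions.

```python
# Illustrative sketch: pin an out-of-range sound source to the nearest edge
# azimuth scale of the compass (0/360-degree wrap-around omitted for brevity).
def indicator_azimuth(source_deg, visible_lo, visible_hi):
    """Clamp the source's horizontal azimuth to the visible range edges."""
    if source_deg < visible_lo:
        return visible_lo  # pinned to the left edge azimuth scale
    if source_deg > visible_hi:
        return visible_hi  # pinned to the right edge azimuth scale
    return source_deg      # inside the range: display at its own scale
```

For example, with a visible range of 165 to 255 degrees, a source at 300 degrees would be displayed at the 255-degree edge scale.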
• Step 212 Cancel or skip displaying the (first) sound indicator when the first avatar enters the deaf state.
• the terminal cancels the display of the first sound indicator below the azimuth scale sequence on the compass information; that is, it cancels the display of all sound indicators corresponding to the first virtual character.
• for example, when the first sound source is a thrown grenade or bomb and it explodes within close range of the first avatar, the first avatar enters the deaf state.
• when the first sound source exists in the surrounding environment of the first avatar, the first sound indicator is displayed based only on the first azimuth scale in the compass information, without a mini-map prompting the position; the first sound indicator can also indicate the first horizontal orientation and the first vertical orientation corresponding to the first sound source at the same time, so that the user can accurately judge the spatial position of the first sound source by visual representation alone.
• the user can still obtain enough effective spatial information about the sound source.
• the method provided in this embodiment also indicates the vertical orientation, sound type, sound size, sound distance, and action frequency of the sound through different visual representations that exist simultaneously on the first sound indicator, so that the user can obtain the vertical orientation, sound type, sound size, sound distance, and action frequency of the first sound from the display alone; the effective information of the first sound can thus also be obtained in hearing-limited scenarios, such as playing without external sound or without headphones.
  • the display space on the user interface can be saved.
  • there is no small map to prompt the position, and only multiple visual representations of the first sound indicator of the compass are used to prompt the sound in various ways, which can bring a more immersive gaming experience to the user.
• the method provided in this embodiment also improves the accuracy of the user's judgment of the spatial position of the first sound source by visual representation alone, by canceling or skipping the display of the (first) sound indicator when the first virtual character enters the deaf state.
  • step 205 may also optionally include at least one of the following steps:
• determining the pitch angle (pitch) of the first sound source relative to the first virtual character; and determining the vertical orientation of the first sound source based on the value range of the pitch angle.
• the vertical orientation of the first sound source is determined using the pitch angle: when the pitch angle of the first sound source relative to the first virtual character is within the range of -17° to 17°, 163° to 180°, or -163° to -180°, the vertical orientation of the first sound source is determined to be the middle orientation relative to the first avatar; when the pitch angle is within the range of 17° to 163°, the vertical orientation of the first sound source is determined to be the upper orientation relative to the first avatar; when the pitch angle is within the range of -17° to -163°, the vertical orientation of the first sound source is determined to be the lower orientation relative to the first avatar.
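The pitch-angle classification above can be sketched as follows. The degree ranges follow the text; the exact handling of the boundary values (inclusive versus exclusive) is an assumption here.

```python
# Sketch of the pitch-angle classification: degree ranges follow the text,
# boundary handling (inclusive vs. exclusive) is an assumption.
def vertical_orientation(pitch_deg):
    """Return "upper", "middle" or "lower" for a pitch angle in degrees."""
    if -17 <= pitch_deg <= 17 or pitch_deg >= 163 or pitch_deg <= -163:
        return "middle"   # roughly level with the first virtual character
    if 17 < pitch_deg < 163:
        return "upper"    # above the first virtual character
    return "lower"        # below the first virtual character
```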
  • the color of the first sound indicator is determined based on the sound type of the first sound.
  • Table 1 shows the first correspondence between sound types and colors.
  • the terminal can determine the color corresponding to the sound type of the first sound, and determine the color as the color of the first sound indicator.
  • the icon style of the first sound indicator is determined according to the sound type of the first sound.
  • Table 2 shows the second corresponding relationship between sound types and icon styles.
  • the terminal can determine the icon style corresponding to the sound type of the first sound by querying the second correspondence, and determine the icon style as the icon style of the first sound indicator.
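A minimal sketch of the Table 1 / Table 2 lookups. Only the pairings given as examples in the text are filled in (white for footsteps and NPC shouts, red for gunshots and explosions; gun, footprint, head, and explosion icon styles); the key names and fallback values are assumptions.

```python
# Illustrative sketch of the sound type -> color (Table 1) and
# sound type -> icon style (Table 2) lookups.
SOUND_COLOR = {"footsteps": "white", "npc_shout": "white",
               "gunshot": "red", "explosion": "red"}
SOUND_ICON = {"gunshot": "gun", "footsteps": "footprint",
              "human_voice": "head", "explosion": "explosion"}

def indicator_style(sound_type):
    """Return the (color, icon style) pair for a sound type."""
    color = SOUND_COLOR.get(sound_type, "white")    # assumed fallback
    icon = SOUND_ICON.get(sound_type, "generic")    # assumed fallback
    return color, icon
```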
  • the third visual representation includes the amplitude magnitude of the first sound wave magnitude spectrum:
  • Step 1 Determine the arrival sound size of the first sound according to the original sound size and the influence parameter of the first sound, and the influence parameter includes at least one of the following parameters:
  • the sound-related equipment worn by the first virtual character includes: at least one of different types of helmets and earphones.
• the type of equipment and the wearing condition of the equipment will affect the sound level of the first sound.
• the arrival sound magnitude determined when the first virtual character is wearing headphones is greater than that determined when the first virtual character is not wearing headphones; the arrival sound magnitude determined when the first virtual character is wearing a helmet is smaller than that determined when the first virtual character is not wearing a helmet.
  • the sound-related equipment worn by the second virtual character includes: at least one of different types of firearms, different ammunition types, and mufflers.
• the type of equipment and the wearing condition of the equipment will affect the sound level of the first sound.
  • the magnitude of the arriving sound determined when the second virtual character wears the muffler is smaller than the magnitude of the arriving sound determined when the second virtual character does not wear the muffler.
  • the sound of the first virtual character's shoes touching different grounds will affect the volume of the sound
  • the sound of the first virtual character's shoes of different materials touching the same ground will affect the volume of the sound
• arrival sound size = (original sound size × influence coefficient of original sound size) × (1 − sound distance / (maximum effective distance of sound × influence coefficient of maximum distance)).
  • the magnitude of the original sound is the magnitude of the sound emitted by the first sound at the first sound source.
  • the influence coefficient of the original sound level is related to the above-mentioned influence parameters, and is usually set as an empirical value by a designer.
  • the influence coefficient at the maximum distance of the sound is used to indicate the sound attenuation characteristics, which is related to the above influence parameters and is usually set by the designer as an empirical value.
• for example, the first virtual character wears a sound-isolating helmet and hears a gunshot fired through a muffler from 75 m away.
  • the original sound volume of the gunshot is 100, and the maximum effective distance is 150m.
  • the influence coefficient of the silencer on the original sound level is 1, and the influence coefficient on the maximum distance is 0.5; the influence coefficient of the helmet on the original sound level is 0.5, and the influence coefficient on the maximum distance is 0.5.
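The example above can be worked through with the arrival-sound formula. How multiple influence coefficients combine is not spelled out in the text; multiplying them together is an assumption here, as is clamping negative results to zero (inaudible).

```python
# Worked sketch of the arrival-sound formula, using the example numbers above.
# Assumptions: multiple coefficients multiply; negative results clamp to 0.
def arrival_sound(original, distance, max_distance,
                  orig_coeffs=(), dist_coeffs=()):
    oc = 1.0
    for c in orig_coeffs:
        oc *= c  # combined influence on the original sound size
    dc = 1.0
    for c in dist_coeffs:
        dc *= c  # combined influence on the maximum effective distance
    size = (original * oc) * (1 - distance / (max_distance * dc))
    return max(size, 0.0)

# Gunshot: original size 100, heard from 75 m, maximum effective distance
# 150 m; muffler coefficients (1, 0.5) and helmet coefficients (0.5, 0.5).
heard = arrival_sound(100, 75, 150,
                      orig_coeffs=(1, 0.5), dist_coeffs=(0.5, 0.5))
```

Under these assumptions, the combined maximum effective distance shrinks to 37.5 m, so the 75 m gunshot clamps to 0 and is inaudible; with no equipment at all, the same gunshot would arrive at size 50.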
  • Step 2 Determine the amplitude of the first sound wave amplitude spectrum according to the arrival sound level of the first sound at the first virtual character.
• the client maps the arrival sound magnitude of the first sound to the sound wave amplitude of the first sound wave amplitude spectrum through a "sound magnitude to wave amplitude" conversion curve.
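Such a conversion curve can be sketched as piecewise-linear interpolation over a few control points; the curve points below are illustrative assumptions, since the application does not specify them.

```python
# Illustrative "sound magnitude -> wave amplitude" conversion curve,
# implemented as piecewise-linear interpolation (curve points are assumed).
CURVE = [(0, 0.0), (25, 0.3), (50, 0.6), (100, 1.0)]  # (sound, amplitude)

def wave_amplitude(sound):
    """Map an arrival sound magnitude to a sound wave amplitude in [0, 1]."""
    sound = max(CURVE[0][0], min(sound, CURVE[-1][0]))  # clamp to curve ends
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if sound <= x1:
            return y0 + (y1 - y0) * (sound - x0) / (x1 - x0)
    return CURVE[-1][1]
```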
  • the fourth visual representation includes the start display time of the first sound indicator
• the terminal determines the start display time of the first sound indicator; the start display time is later than the generation time of the first sound.
• start display time = first sound generation time + sound distance / sound propagation speed in the virtual environment
  • the first sound generation time is the time when the first sound source emits the first sound
  • the sound distance is the distance between the first sound source and the first virtual character
• the sound propagation speed in the virtual environment is usually set by the designer as an empirical value.
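The delayed-display rule above reduces to a one-line computation; the default propagation speed below is an illustrative value, not one from this application.

```python
# Sketch of the delayed-display rule: the indicator appears only after the
# sound has "traveled" from the source to the first virtual character.
def start_display_time(generation_time, sound_distance, speed=340.0):
    """start display time = generation time + distance / propagation speed."""
    return generation_time + sound_distance / speed
```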
  • the first sound indicator is represented by a sound wave amplitude spectrum
  • the fifth visual representation includes the vibration frequency of the sound wave amplitude spectrum
  • the vibration frequency of the sound wave amplitude spectrum is determined according to the action frequency when the first sound source emits the first sound.
• when the first virtual character is running, the sound wave amplitude spectrum is displayed in white, and the jitter frequency of the sound wave amplitude spectrum is larger, to show the rush of running.
• when the first avatar crouches and steps lightly, the jitter frequency of the sound wave amplitude spectrum is smaller, to show the feeling of moving slowly and distinguish it from running.
  • FIG. 12 shows a configuration interface 1200 for the impact coefficient of the helmet headset on the arrival sound level of the first sound.
  • the configuration interface 1200 includes three configuration items: increasing the sound wave amplitude coefficient 1201, the sound size influence coefficient 1202 and the sound maximum distance influence coefficient 1203.
• the increase sound wave amplitude coefficient 1201 configuration item is used to configure the coefficient by which the sound wave amplitude is increased when the first avatar wears the helmet headset; the sound size influence coefficient 1202 configuration item is used to configure the influence coefficient on the arrival sound magnitude of the first sound when the first avatar wears the helmet headset; and the sound maximum distance influence coefficient 1203 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can propagate when the first avatar wears the helmet headset.
• when the sound size influence coefficient 1202 and the sound maximum distance influence coefficient 1203 configuration items are both 1, the first avatar wearing the helmet headset has no influence on the arrival sound magnitude of the first sound or the maximum distance of the first sound.
  • FIG. 13 shows a configuration interface 1300 for the influence coefficient of the muffler on the magnitude of the arrival sound of the first sound.
  • the configuration interface 1300 includes two configuration items, the muffler sound size influence coefficient 1301 and the muffler sound maximum distance influence coefficient 1302.
• the muffler sound size influence coefficient 1301 configuration item is used to configure the influence coefficient on the arrival sound magnitude of the first sound when the first avatar wears the muffler, and the muffler sound maximum distance influence coefficient 1302 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can travel when the first avatar wears the muffler.
• when the muffler sound size influence coefficient 1301 is 1, the first avatar wearing the muffler has no influence on the arrival sound magnitude of the first sound.
  • FIG. 14 shows a configuration interface 1400 of the impact coefficient of the stepping material on the magnitude of the arrival sound of the first sound.
• the configuration interface 1400 includes two configuration items for the marble-metal material 1410: the sound size influence coefficient 1401 and the sound maximum distance influence coefficient 1402.
• the sound size influence coefficient 1401 configuration item is used to configure the influence coefficient on the arrival sound magnitude of the first sound when the first avatar steps on ground made of marble-metal material, and the sound maximum distance influence coefficient 1402 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can travel when the first avatar steps on such ground.
• when the sound size influence coefficient 1401 and the sound maximum distance influence coefficient 1402 are both 1, the first avatar stepping on ground made of marble-metal material has no effect on the arrival sound magnitude of the first sound or the maximum distance of the first sound.
  • FIG. 15 shows a configuration interface 1500 for different sound types and their basic configuration parameters.
  • The configuration interface 1500 configures each sound type separately: sprinting 1501, climbing 1502, crouch-walking 1503, walking quietly 1504, climbing quietly 1505, crouch-walking quietly 1506, single-shot firing 1507, burst firing 1508, NPC shouting 1509, brushing through grass 1510, injured shouting 1511, dying crying 1512, explosion 1513, and walking 1514. Eight parameters are configurable for each type: base sound volume 1520, sound icon display time 1530, sound icon fade time 1540, icon index 1550, sound icon refresh interval 1560, maximum effective sound range 1570, sound waveform jitter frequency 1580, and opacity curve 1590.
  • FIG. 16 shows a configuration interface 1600 of configuration parameters for different firearm types, expressed as additions to and subtractions from the single-shot firing parameters.
  • The configuration interface includes a configuration page 1600 for the basic parameters of single-shot firing and, built on that base, a parameter configuration page 1601 for pistols and a parameter configuration page 1602 for bolt-action rifles.
  • The single-shot configuration page includes base sound volume 1620, sound icon display time 1630, sound icon fade time 1640, icon index 1650, sound icon refresh interval 1660, maximum effective sound range 1670, and sound waveform jitter frequency 1680.
  • On the pistol configuration page, the maximum effective sound range 1670 configuration item is −100 meters: 100 meters smaller than the single-shot value of 200 meters, so the pistol's maximum effective sound range is 100 meters.
  • The pistol's base sound volume 1620, sound icon display time 1630, sound icon fade time 1640, sound icon refresh interval 1660, and sound waveform jitter frequency 1680 are the same as for single-shot firing.
  • the basic sound size 1620 configuration item is 20
  • the sound icon fade time 1640 configuration item is 0.3
  • the bolt-action rifle is 20 larger than the single-shot basic sound size 1620 configuration item
  • the sound icon fade time is 1640 configuration
  • the item length is 0.3
  • the maximum effective range of the bolt-action rifle's sound is 1670
  • the sound icon display time is 1630
  • the sound icon refresh interval is 1660
  • the sound waveform jitter frequency is 1680, which are the same as single-shot firing.
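The delta-based firearm configuration described above can be sketched as a per-weapon table of adjustments applied to the single-shot base. Only the values stated in the text are used (base volume 20, fade time 0.3, maximum range 200 m, the pistol's −100 m, the bolt-action rifle's +20 and +0.3); the key names and the function are illustrative.

```python
# Base parameters for single-shot firing, per the configuration page 1600.
SINGLE_SHOT_BASE = {
    "base_volume": 20,           # basic sound size 1620
    "icon_fade_time": 0.3,       # sound icon fade time 1640
    "max_effective_range": 200,  # sound maximum effective range 1670 (meters)
}

# Per-firearm deltas applied on top of the single-shot base, as in FIG. 16:
# the pistol's range is 100 m smaller; the bolt-action rifle is 20 louder
# and its icon fades 0.3 more slowly. Unlisted parameters stay unchanged.
FIREARM_DELTAS = {
    "pistol": {"max_effective_range": -100},
    "bolt_action_rifle": {"base_volume": +20, "icon_fade_time": +0.3},
}

def firearm_params(kind):
    """Resolve a firearm's parameters from the single-shot base plus its deltas."""
    params = dict(SINGLE_SHOT_BASE)
    for key, delta in FIREARM_DELTAS.get(kind, {}).items():
        params[key] += delta
    return params
```

Storing only deltas keeps every weapon page consistent with the single-shot base: changing a base value automatically propagates to all firearm types that do not override it.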
  • FIG. 17 shows a configuration interface 1700 for general configuration parameters.
  • The configuration interface 1700 includes the maximum number of sound ripples displayed 1710, the icon display angle threshold 1720, the upper angle threshold 1730, the lower angle threshold 1740, the war sound ...
  • The icon display angle threshold 1720 configuration item is 90°: by default, sound icons of effective sound sources are displayed within the area covered by rotating the first virtual character's camera 90° to the left and right. The mapping curve 1770 of sound ripple height versus sound volume, whose horizontal axis is sound intensity and whose vertical axis is waveform height, converts the arrival volume of the first sound into the ripple height of the first sound indicator.
  • With the method provided by this embodiment, when a first sound source exists in the surrounding environment of the first virtual character, a first sound indicator can be displayed based on the first azimuth scale in the compass information, and the indicator simultaneously indicates the first horizontal azimuth and the first vertical azimuth corresponding to the first sound source. The user can accurately judge the spatial position of the first sound source by visual representation alone, and can obtain sufficient effective spatial information about the sound source even in hearing-restricted scenarios where sound cannot be played aloud or headphones cannot be used.
  • The method provided by this embodiment can also further distinguish the properties of the first sound source by the amplitude, jitter frequency, and duration of the sound-wave amplitude spectrum below the sound indicator, and determine the vertical azimuth of the first sound source in the virtual world by comparing the sound-source angle with the pitch angle, improving the prompt effect in terms of sound volume, sound frequency, and sound type.
  • Fig. 18 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
  • the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
  • the method includes the following steps:
  • Step 1802 Obtain the parameters of the first sound
  • the terminal acquires parameters of the first sound emitted by the first sound source. In other words, the terminal acquires parameters of the sound event of the first sound emitted by the first sound source.
  • Step 1804 Identify the sound type of the first sound
  • the sound type of the first sound is determined according to the sound parameters of the first sound acquired by the terminal, and the second visual representation carried by the corresponding first sound indicator is determined according to different sound types.
  • the second visual representation is color
  • If, according to the sound parameters of the first sound acquired by the terminal, the first sound is judged to be a virtual character's footsteps or an NPC shout, the first sound indicator is displayed in white; if the first sound is judged to be a gunshot or an explosion, the first sound indicator is displayed in red.
  • the second visual representation is an icon style
  • The type of the first sound is judged according to the sound parameters acquired by the terminal, and an icon style corresponding to the first sound is used for the first sound indicator. For example, if the first sound is judged to be a virtual character's footsteps, the first sound indicator uses a footprint icon style; if the first sound is judged to be a gunshot, it uses a gun icon style.
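The type-to-appearance rule of step 1804 amounts to a lookup table. A minimal sketch follows; the colors and the footprint/gun icons are taken from the examples above, while the remaining icon identifiers and the fallback style are hypothetical.

```python
# Second visual representation (color, icon style) per sound type.
# White/red and footprint/gun follow the examples in the text; the
# other icon names are hypothetical placeholders.
SOUND_STYLE = {
    "footsteps": ("white", "footprint_icon"),
    "npc_shout": ("white", "head_icon"),
    "gunshot":   ("red",   "gun_icon"),
    "explosion": ("red",   "explosion_icon"),
}

def indicator_style(sound_type):
    """Return the (color, icon style) pair for the first sound indicator."""
    return SOUND_STYLE.get(sound_type, ("white", "generic_icon"))
```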
  • Step 1806a: Calculate the arrival volume according to the distance between the first sound source and the first virtual character and the original sound volume configured for the type of the first sound;
  • the sound event of the first sound carries the three-dimensional coordinates of the first sound source.
  • the terminal can calculate the distance between the first sound source and the first virtual character by calculating the distance between the three-dimensional coordinates of the first sound source and the three-dimensional coordinates of the first virtual character.
  • Sound volume attenuates with distance: the longer the distance between the first sound source and the first virtual character, the smaller the arrival volume of the first sound; the shorter the distance, the louder the first sound.
  • the sound event of the first sound also carries the original sound magnitude of the first sound source, and the terminal attenuates the original sound magnitude of the first sound by using the distance as an influencing parameter.
  • Step 1806b: Calculate the arrival volume according to influence parameters such as tactical props and equipment;
  • The terminal also determines the corresponding influence coefficients according to influence parameters such as tactical props and equipment, and then calculates the final arrival volume of the first sound.
  • Exemplarily, tactical props and equipment include:
  • sound-related equipment worn by the first virtual character, including at least one of different types of helmets and headphones; the equipment type and whether it is worn affect the arrival volume of the first sound;
  • sound-related equipment worn by the second virtual character, including at least one of different types of firearms, different ammunition types, and mufflers; the equipment type and whether it is worn affect the arrival volume of the first sound;
  • the ground material: the sound of the first virtual character's shoes touching different grounds affects the sound volume;
  • the shoe material: the sound of the first virtual character's shoes of different materials touching the same ground affects the sound volume.
  • Arrival volume = (original volume × influence coefficient of the original volume) × (1 − sound distance / (maximum effective sound distance × influence coefficient of the maximum distance)).
  • the magnitude of the original sound is the magnitude of the sound emitted by the first sound at the first sound source.
  • the influence coefficient of the original sound level is related to the above-mentioned influence parameters, and is usually set as an empirical value by a designer.
  • the influence coefficient at the maximum distance of the sound is used to indicate the sound attenuation characteristics, which is related to the above influence parameters and is usually set by the designer as an empirical value.
  • For example, the first virtual character wears a sound-isolating helmet and hears a muffled gunshot from 75 m away.
  • The original volume of the gunshot is 100, and its maximum effective distance is 150 m.
  • The muffler's influence coefficient on the original volume is 1 and on the maximum distance is 0.5; the helmet's influence coefficient on the original volume is 0.5 and on the maximum distance is 0.5.
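The arrival-volume formula and the helmet/muffler example can be combined into a short sketch. Two details are assumptions not stated explicitly above: coefficients from several influence parameters are multiplied together, and results beyond the effective range are clamped to zero.

```python
from math import prod

def arrival_volume(original, distance, max_distance,
                   original_coeffs=(), max_dist_coeffs=()):
    """Arrival volume per the formula in the text:
    (original x product of original-volume coefficients)
      x (1 - distance / (max_distance x product of max-distance coefficients)).
    Multiplying coefficients and clamping out-of-range results to 0 are
    assumptions."""
    effective_max = max_distance * prod(max_dist_coeffs)  # prod(()) == 1
    if distance >= effective_max:
        return 0.0
    return original * prod(original_coeffs) * (1.0 - distance / effective_max)

# The example above: a muffled gunshot (original volume 100, maximum
# effective distance 150 m) heard from 75 m through a sound-isolating
# helmet. The muffler's coefficients are (1, 0.5) and the helmet's
# (0.5, 0.5), so the effective maximum distance shrinks to
# 150 * 0.5 * 0.5 = 37.5 m and the 75 m shot is out of range.
print(arrival_volume(100, 75, 150,
                     original_coeffs=(1.0, 0.5),
                     max_dist_coeffs=(0.5, 0.5)))  # 0.0
print(arrival_volume(100, 75, 150))  # 50.0 with no equipment
```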
  • Step 1806a and step 1806b can be calculated at the same time; alternatively, the volume before being affected by tactical props and equipment is used as an intermediate value, and the final volume after being affected by tactical props and equipment is then calculated.
  • Step 1808 Determine whether the horizontal azimuth of the first sound source is within the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information;
  • the compass information includes a sequence of azimuth scales, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character is facing in the virtual world.
  • The azimuth scale sequence displays the azimuth scales of the horizontal azimuths that can be observed from the first virtual character's perspective in the virtual world; azimuth scales of horizontal azimuths that cannot be observed at the current viewing angle may not be displayed in the azimuth scale sequence.
  • a direction scale within a preset range centered on the horizontal position right in front of the first virtual character is displayed in the sequence of direction scales.
  • If the horizontal azimuth of the first sound source is outside the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information, step 1810 is executed to present the sound in the form of a second sound indicator; if the horizontal azimuth of the first sound source is within that visible azimuth range, step 1812 is executed to calculate, according to the pitch angle, whether the source is above or below the first virtual character.
  • Step 1810: Present the sound in the form of a second sound indicator.
  • The sound is determined to be a second sound (its source a second sound source), and the second sound indicator is displayed based on the edge azimuth scale in the azimuth scale sequence closest to the second horizontal azimuth.
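The choice between the first and the second sound indicator can be sketched as an azimuth clamp: a source inside the visible range anchors at its own azimuth scale, while a source outside it anchors at the nearest edge scale. The 90° half-range follows the default icon display angle threshold mentioned earlier; function names are illustrative.

```python
def wrap180(angle):
    """Wrap an angle in degrees to the interval (-180, 180]."""
    return -((180.0 - angle) % 360.0 - 180.0)

def indicator_anchor(camera_heading, source_azimuth, half_range=90.0):
    """Return (anchor azimuth, indicator kind) for a sound source.

    Inside the visible range the first sound indicator sits at the
    source's own azimuth scale; outside it, the second sound indicator
    sits at the nearest edge azimuth scale."""
    offset = wrap180(source_azimuth - camera_heading)
    if abs(offset) <= half_range:
        return source_azimuth % 360.0, "first"
    edge = half_range if offset > 0 else -half_range
    return (camera_heading + edge) % 360.0, "second"
```

The wrap-to-(−180, 180] step matters near north: a source at 100° with the camera at 350° is 110° to the right, not 250° to the left, so it clamps to the right-hand edge scale at 80°.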
  • Step 1812: Calculate, according to the pitch angle, whether the first sound source is above or below the first virtual character.
  • When the first sound source is above or below the first virtual character, step 1814 is executed to present the sound in the form of an upper or lower first sound indicator: when the first sound source is above the first virtual character, the upper information is displayed in the form of the upper first sound indicator; when it is below, the lower information is displayed in the form of the lower first sound indicator.
  • When the first sound source is neither above nor below the first virtual character, step 1816 is executed to present the sound in the form of the middle first sound indicator: the first sound source is level with the first virtual character, and the middle information is displayed in the form of the middle first sound indicator.
  • When the pitch angle of the first sound source relative to the first virtual character is in the range of −17° to 17°, 163° to 180°, or −180° to −163°, the first sound source is determined to be level with the first virtual character (middle); when the pitch angle is in the range of 17° to 163°, the first sound source is determined to be above the first virtual character; when the pitch angle is in the range of −163° to −17°, the first sound source is determined to be below the first virtual character.
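The pitch-angle ranges above translate directly into a classifier. Which orientation the exact boundary values (±17°, ±163°) belong to is ambiguous in the text; here they are assigned to the middle orientation as an assumption.

```python
def vertical_orientation(pitch_deg):
    """Classify the first sound source's vertical orientation relative to
    the first virtual character: middle for [-17, 17] and [163, 180] /
    [-180, -163], above for (17, 163), below for (-163, -17)."""
    if -17.0 <= pitch_deg <= 17.0 or abs(pitch_deg) >= 163.0:
        return "middle"
    return "above" if pitch_deg > 0 else "below"
```

The second "middle" band (|pitch| ≥ 163°) covers sources level with the character but behind it, so a source directly behind is prompted at the same height as one directly ahead.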
  • Step 1814 present the sound in the form of the first sound indicator above or below;
  • Step 1816 present the sound in the form of the middle first sound indicator
  • Step 1818: Determine whether the first sound is a gunshot.
  • If the first sound is a gunshot, step 1820 is executed; otherwise, step 1822 is executed to present the first sound differentiated by virtual-character and other sound types.
  • Step 1820: Present the first sound differentiated by gunshot type.
  • Exemplarily, the terminal determines from the sound parameters that the first sound is a gunshot, confirms that its first sound indicator is red or uses a firearm icon style, and then determines, from the first sound parameters, the sound-wave amplitude, sound-wave vibration frequency, and sound-wave duration of the first sound indicator.
  • Step 1822: Present the first sound differentiated by virtual-character and other sound types.
  • Exemplarily, the terminal determines from the sound parameters that the first sound source is a virtual character, confirms that the first sound indicator is white or uses a footprint or human-head icon style, and then determines, from the first sound parameters, the sound-wave amplitude, sound-wave vibration frequency, and sound-wave duration of the first sound indicator.
  • In summary, the terminal acquires the sound parameters of the first sound to identify its type, and calculates the arrival volume of the first sound from the distance between the first sound source and the first virtual character and the original volume of the first sound source.
  • The horizontal and vertical azimuths of the first sound are determined, improving the accuracy of the prompt in terms of sound volume, sound frequency, and sound type.
  • Fig. 19 shows a schematic structural diagram of a sound prompting device in a virtual world provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or a part of computer equipment through software, hardware or a combination of the two, and the device 1900 includes:
  • the display module 1901 is used to display the angle of view screen of the first virtual character, compass information is displayed on the angle of view screen, and the compass information includes a sequence of azimuth scales, and the azimuth scales in the sequence of azimuth scales are used to indicate the first avatar A horizontal direction that the virtual character faces in the virtual world;
  • a control module 1902 configured to control the activities of the first virtual character in the virtual world
  • The display module 1901 is configured to, while the first virtual character moves in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, display a first sound indicator based on the first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate the horizontal azimuth and the vertical azimuth corresponding to the first sound source.
  • the display module 1901 is configured to display a first sound indicator with a visual representation based on the first azimuth scale in the sequence of azimuth scales, and the first sound indicator The central position of the indicator is aligned with the first azimuth scale, the first azimuth scale is used to indicate the horizontal azimuth of the first sound source, and the visual representation of the first sound indicator is used to indicate the first sound source The vertical orientation of the source.
  • the visual representation of the first sound indicator includes a first visual representation
  • the first visual representation is used to indicate the vertical orientation of the first sound source
  • the first The visual representation includes at least one of: the shape of the first sound indicator; a vertical orientation scale in the first sound indicator; an arrow in the first sound indicator; text prompt.
  • the first visual representation includes: n first visual representations, the n first visual representations are in one-to-one correspondence with n vertical orientations, and n is a positive integer greater than 1 ;
  • the display module 1901 is configured to display a first sound indicator with an i-th first visual representation based on the first azimuth scale in the compass information, and the i-th first visual representation is used to indicate the The first sound source corresponds to the i-th vertical orientation, and i is a positive integer not greater than n.
  • the vertical orientation includes: upper orientation, middle orientation and lower orientation
  • The display module 1901 is configured to display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator in an upward shape, indicating that the vertical azimuth of the first sound source is the upper azimuth; or display a first sound indicator in a vertically symmetric shape, indicating that the first sound source is level with the first virtual character; or display a first sound indicator in a downward shape, indicating that the vertical azimuth of the first sound source is the lower azimuth.
  • The display module 1901 is configured to display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with the vertical azimuth scale, the vertical azimuth scale indicating the pitch angle of the first sound source relative to the first virtual character; or display a first sound indicator with the arrow, the arrow's direction indicating the vertical azimuth of the first sound source; or display a first sound indicator with the text prompt, the text prompt indicating the vertical azimuth of the first sound source.
  • the first sound source is used to emit a first sound
  • the visual representation of the first sound indicator also includes other visual representations, wherein the first visual representation and the The other visual representations are different types of visual representations, and the other visual representations include at least one of the following visual representations:
  • a fifth visual representation for indicating the frequency of motion of said first sound.
  • the device also includes:
  • a determining module 1903 configured to determine the visual representation of the first sound indicator according to the sound parameters of the first sound.
  • the sound parameter includes a sound type
  • the determining module 1903 is configured to:
  • the second visual representation of the first sound indicator is determined based on a sound type of the first sound.
  • the second visual representation includes a color of the first sound indicator or an icon style of the first sound indicator.
  • the first sound indicator is represented by a sound wave amplitude spectrum
  • the third visual representation includes the magnitude of the first sound wave amplitude spectrum
  • the sound parameters include sound magnitude
  • the determination module 1903 is configured to determine the amplitude of the first sound wave amplitude spectrum according to the arrival sound level of the first sound at the first virtual character.
  • the determination module 1903 is configured to determine the magnitude of the arriving sound of the first sound according to the magnitude of the original sound of the first sound and an influence parameter, and the magnitude of the influence parameter Include at least one of the following parameters:
  • the equipment worn by the second virtual character, in the case where the first sound source is a second virtual character;
  • the material of the first sound source, or the material that the first sound source comes into contact with.
  • For the same original sound volume, the arrival volume determined when the first virtual character is wearing headphones is larger than that determined when the first virtual character is not wearing headphones; or, for the same original sound volume, the arrival volume determined when the first virtual character is wearing a helmet is smaller than that determined when the first virtual character is not wearing a helmet; or, for the same original sound volume, the arrival volume determined when the second virtual character is wearing a muffler is smaller than that determined when the second virtual character is not wearing a muffler.
  • the fourth visual representation includes the start display time of the first sound indicator, the sound parameters include sound propagation speed, and the determining module 1903 is configured to The sound propagation speed between the first sound source and the first virtual character determines the start display time of the first sound indicator, and the start display time is later than the generation time of the first sound.
  • the first sound indicator is represented by a sound wave amplitude spectrum
  • the fifth visual representation includes the vibration frequency of the sound wave amplitude spectrum
  • the sound parameter includes sound propagation speed
  • the determination module 1903 is configured to determine the vibration frequency of the sound wave amplitude spectrum according to the action frequency when the first sound source emits the first sound.
  • the visual representation of the first sound indicator includes: at least one of shape, pattern, color, texture, text, animation effect, start display time, continuous display time, and blanking time A sort of.
  • the azimuth scale sequence in the compass information corresponds to the visible azimuth range of the first virtual character
  • The display module 1901 is configured to, when a second sound source exists in the surrounding environment of the first virtual character and the horizontal azimuth of the second sound source is outside the visible azimuth range, display a second sound indicator based on the edge azimuth scale in the azimuth scale sequence closest to the horizontal azimuth of the second sound source,
  • the second sound indicator being used to indicate that the second sound source exists along the horizontal azimuth indicated by the edge azimuth scale.
  • The display module 1901 is configured to cancel displaying the first sound indicator when the first virtual character enters a deafened state.
  • the determining module 1903 is configured to, when the first sound source emits at least two sounds and the time difference between the generation times of the at least two sounds is less than a threshold, set the The sound with the loudest volume among the at least two sounds is determined as the first sound.
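The rule that only the loudest of nearly simultaneous sounds becomes the first sound can be sketched as a time-window merge. The greedy grouping strategy (each kept event anchors a window that later, quieter events fall into) is an assumption; the text only states the loudest-wins rule.

```python
def select_first_sounds(events, threshold):
    """Merge sounds from one source: events are (time, volume) pairs.
    When two sounds are generated less than `threshold` apart, only the
    louder one is kept as the first sound to be indicated."""
    kept = []
    for t, volume in sorted(events):  # process in generation order
        if kept and t - kept[-1][0] < threshold:
            if volume > kept[-1][1]:
                kept[-1] = (t, volume)  # the louder sound wins the window
        else:
            kept.append((t, volume))
    return [v for _, v in kept]
```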
  • the present application also provides a computer device, the computer device includes a processor and a memory, at least one instruction is stored in the memory, at least one instruction is loaded and executed by the processor to realize the sound prompt of the virtual world provided by the above method embodiments method.
  • the computer device may be a terminal as shown in FIG. 20 below.
  • Fig. 20 shows a structural block diagram of a computer device 2000 provided by an exemplary embodiment of the present application.
  • The computer device 2000 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the computer device 2000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and other names.
  • a computer device 2000 includes: a processor 2001 and a memory 2002 .
  • the processor 2001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 2001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • The processor 2001 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), processes data in the wake-up state; the coprocessor is a low-power processor that processes data in the standby state.
  • The processor 2001 may be integrated with a GPU (Graphics Processing Unit), which is used to render and draw the content to be displayed on the display screen.
  • The processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
  • Memory 2002 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 2002 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
  • The non-transitory computer-readable storage medium in the memory 2002 is used to store at least one instruction, and the at least one instruction is executed by the processor 2001 to implement the sound prompting method in a virtual world provided by the method embodiments of this application.
  • the computer device 2000 may optionally further include: a peripheral device interface 2003 and at least one peripheral device.
  • the processor 2001, the memory 2002, and the peripheral device interface 2003 may be connected through buses or signal lines.
  • Each peripheral device can be connected to the peripheral device interface 2003 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 2004 , a display screen 2005 , a camera 2006 , an audio circuit 2007 , a positioning component 2008 and a power supply 2009 .
  • the computing device 2000 also includes one or more sensors 2010 .
  • the one or more sensors 2010 include, but are not limited to: an acceleration sensor 2011 , a gyro sensor 2012 , a pressure sensor 2013 , a fingerprint sensor 2014 , an optical sensor 2015 and a proximity sensor 2016 .
  • FIG. 20 does not constitute a limitation on the computer device 2000, which may include more or fewer components than shown in the figure, combine some components, or adopt a different component arrangement.
  • A computer-readable storage medium is also provided, in which at least one piece of program code is stored; the program code is loaded and executed by a processor to implement the above sound prompting method in a virtual world.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above sound prompting method in a virtual world.
  • the program can be stored in a computer-readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.


Abstract

A sound prompting method, apparatus, device, and storage medium in a virtual world, belonging to the field of human-computer interaction. The method includes: displaying a viewing-angle picture of a first virtual character, where compass information is displayed on the viewing-angle picture and includes an azimuth scale sequence (202); controlling the first virtual character to move in the virtual world (204); and, while the first virtual character moves in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, displaying a first sound indicator based on a first azimuth scale in the compass information, the first sound indicator being used to indicate the horizontal azimuth and the vertical azimuth corresponding to the first sound source (206). The sound prompting method can prompt sounds using only the first sound indicator corresponding to the azimuth scale sequence in the compass information, superimposing different visual representations on the same sound indicator to indicate multiple kinds of sound information simultaneously.

Description

Sound Prompting Method, Apparatus, Device, and Storage Medium in a Virtual World
This application claims priority to Chinese Patent Application No. 202110898406.6, entitled "Sound prompting method, apparatus, device, and storage medium in a virtual world" and filed with the China National Intellectual Property Administration on August 5, 2021, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of human-computer interaction, and in particular to a sound prompting method, apparatus, device, and storage medium in a virtual world.
Background
A user can operate a game character in a game program to compete against others. The game program provides a virtual world, and the game character is a virtual character located in that virtual world.
A game picture and a minimap control are displayed on the terminal. The game picture is obtained by observing the virtual world from the perspective of the current game character, and the minimap control is a control that displays a top-down map of the virtual world. When other game characters are present around the current game character and make sounds while moving in the virtual world, a sound icon is displayed on the minimap control, prompting gunshots, footsteps, silenced shots, and so on in the minimap space. For example, if a sound icon shaped like a pair of footprints is displayed at point A on the minimap control, another game character is walking at the position in the virtual world corresponding to point A.
Because the display area of the minimap control is limited, the sound icon cannot convey the specific position of the sound source in the virtual world, and the effective information it can provide is limited. It is difficult for the user to determine the position of the sound source quickly and conveniently from the icon's prompt, which hinders deeper confrontation between game characters in the virtual world.
Summary
This application provides a sound prompting method, apparatus, device, and storage medium in a virtual world, which can simultaneously indicate the horizontal azimuth and the vertical azimuth of a sound source in the virtual world through a sound indicator. The technical solutions are as follows:
According to one aspect of this application, a sound prompting method in a virtual world is provided, the method including:
displaying a viewing-angle picture of a first virtual character, where compass information is displayed on the viewing-angle picture, the compass information includes an azimuth scale sequence, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal azimuth that the first virtual character faces in the virtual world;
controlling the first virtual character to move in the virtual world; and
while the first virtual character moves in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, displaying a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate the horizontal azimuth and the vertical azimuth corresponding to the first sound source.
According to another aspect of this application, a sound prompting apparatus in a virtual world is provided, the apparatus including:
a display module, configured to display a viewing-angle picture of a first virtual character, where compass information is displayed on the viewing-angle picture, the compass information includes an azimuth scale sequence, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal azimuth that the first virtual character faces in the virtual world;
a control module, configured to control the first virtual character to move in the virtual world; and
the display module being further configured to, while the first virtual character moves in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, display a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate the horizontal azimuth and the vertical azimuth corresponding to the first sound source.
According to another aspect of this application, a computer device is provided, including a processor and a memory, where at least one piece of program is stored in the memory, and the at least one piece of program is loaded and executed by the processor to implement the sound prompting method in a virtual world described in the foregoing aspects.
According to another aspect of this application, a computer-readable storage medium is provided, in which at least one piece of program is stored, and the at least one piece of program is loaded and executed by a processor to implement the sound prompting method in a virtual world described in the foregoing aspects.
According to another aspect of this application, a computer program product is provided, which, when executed, causes the processor to implement the sound prompting method in a virtual world described in the foregoing aspects.
The beneficial effects brought by the technical solutions provided in this application include at least the following:
When a first sound source exists in the surrounding environment of the first virtual character, since compass information including an azimuth scale sequence is displayed on the first virtual character's viewing-angle picture, a first sound indicator can be displayed based on the first azimuth scale in the azimuth scale sequence, and the indicator can simultaneously indicate the horizontal azimuth and the vertical azimuth corresponding to the first sound source. The user can accurately judge the specific spatial position of the first sound source by visual representation alone, and can obtain sufficient effective spatial information about the sound source even in hearing-restricted scenarios where sound cannot be played aloud or headphones cannot be used, which facilitates deeper confrontation of the first virtual character in the virtual world.
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个示例性实施例提供的计算机系统的结构框图;
图2是本申请一个示例性实施例提供的虚拟世界中声音提示方法的流程图;
图3是本申请一个示例性实施例提供的虚拟环境画面的界面示意图;
图4是本申请一个示例性实施例提供的垂直方向三种声音提示的示意图;
图5是本申请一个示例性实施例提供的虚拟世界中声音提示方法的流程图;
图6是本申请一个示例性实施例提供的提示上方声音的界面示意图;
图7是本申请一个示例性实施例提供的提示两种声音类型的界面示意图;
图8是本申请一个示例性实施例提供的用图标样式提示声音的界面示意图;
图9是本申请一个示例性实施例提供的不同距离下提示声音的示意图;
图10是本申请一个示例性实施例提供的第二声音指示器的界面示意图;
图11是本申请一个示例性实施例提供的垂直方向声音确认的示意图;
图12是本申请一个示例性实施例提供的头盔耳机对声音提示影响的配置图;
图13是本申请一个示例性实施例提供的消音器对声音提示影响的配置图;
图14是本申请一个示例性实施例提供的踩踏材质对声音提示影响的配置图;
图15是本申请一个示例性实施例提供的不同声音类型影响系数的配置图;
图16是本申请一个示例性实施例提供的不同枪械类型参数的配置图;
图17是本申请一个示例性实施例提供的提示声音的通用配置参数的配置图;
图18是本申请一个示例性实施例提供的虚拟世界中声音提示方法的流程图;
图19是本申请一个示例性实施例提供的终端的结构框图;
图20是本申请一个示例性实施例提供的服务器的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
首先,对本申请实施例中涉及的名词进行介绍:
射击游戏:包含所有使用热兵器或冷兵器类远程攻击武器进行远程攻击的游戏。例如第一人称射击游戏(First Person Shooting,简称为FPS)、第三人称射击游戏(Third Person Shooting,简称为TPS)等。可选的,第一人称射击游戏是以玩家主视角进行的射击游戏,玩家不再像别的游戏类型一样操纵屏幕中显示的虚拟人物来进行游戏,而是身临其境的主视角,体验游戏带来的视觉冲击。第三人称射击游戏与第一人称射击游戏的区别在于第一人称射击游戏里屏幕上显示的只有主角的视野不同,第三人称中玩家控制的游戏人物在游戏屏幕上是可见的,更加强调动作感。
虚拟世界:是应用程序在终端上运行时显示(或提供)的虚拟世界。该虚拟世界可以是三维虚拟世界,也可以是二维虚拟世界。该三维虚拟世界可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。下述实施例以虚拟世界是三维虚拟世界来举例说明,但对此不加以限定。可选地,该虚拟世界还用于至少两个虚拟角色之间的虚拟场景对战。可选地,该虚拟场景还用于至少两个虚拟角色之间使用虚拟枪械进行对战。
虚拟角色:是指在虚拟世界中的可活动对象。该可活动对象可以是虚拟世界中的仿真人物角色或动漫人物角色。可选的,当虚拟世界是三维虚拟环境时,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟场景中具有自身的形状和体积,占据三维虚拟场景中的一部分空间。在本申请实施例中,虚拟角色可以是虚拟世界中可以独立发出不同声音的个体,包括第一虚拟角色、第二虚拟角色等,分别代表发出不同声音的独立个体。正在发出声音的个体可以作为声源。
声音指示器:是用于指示虚拟世界中的声音信息的视觉控件,该视觉控件具有一种或多种视觉表现,每种视觉表现用于表示一种声音信息,声音信息的种类包括:水平方位、垂直方位、声音类型、声音大小、声音距离、声源的动作频率中的至少一种。
声音指示器的视觉表现:是指在声音指示器上显示的能被用户视觉捕捉到的显示效果。每种视觉表现包括:形状、图案、颜色、纹理、文字、动画效果、开始显示时间、持续显示时间、消隐时间中的一种或多种组合。不同种类的视觉表现存在不同。可选地,同一个声音指示器上同时叠加不同维度的视觉表现来呈现不同的信息。
Fig. 1 shows a structural schematic diagram of a computer system provided by an exemplary embodiment of this application. The computer system 100 includes: a first terminal 120, a server cluster 140, and a second terminal 160.
The first terminal 120 has installed and runs a game program supporting a virtual environment, which may be a first-person shooter, a third-person shooter, or the like. The first terminal 120 may be the terminal used by a first user, who uses it to operate a first virtual character located in the virtual world to carry out activities including but not limited to at least one of: sprinting, crawling, crouch-walking, quiet walking, quiet crawling, quiet crouch-walking, single-shot firing, burst firing, non-player character (NPC) shouting, brushing through grass, injured shouting, dying shouting, exploding, and walking. Illustratively, the first virtual character is a first virtual person.
The first terminal 120 is connected to the server cluster 140 through a wireless or wired network.
The server cluster 140 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center, and provides backend services for applications supporting the virtual environment. Optionally, the server cluster 140 takes on the primary computing work while the first terminal 120 and the second terminal 160 take on the secondary computing work; or the server cluster 140 takes on the secondary computing work while the first terminal 120 and the second terminal 160 take on the primary computing work; or the server cluster 140, the first terminal 120, and the second terminal 160 cooperate using a distributed computing architecture.
The second terminal 160 has installed and runs a game program supporting a virtual environment, which may be a first-person shooter, a third-person shooter, or the like. The second terminal 160 may be the terminal used by a second user, who uses it to operate a second virtual character located in the virtual world to carry out activities including but not limited to at least one of: sprinting, crawling, crouch-walking, quiet walking, quiet crawling, quiet crouch-walking, single-shot firing, burst firing, NPC shouting, brushing through grass, injured shouting, dying shouting, exploding, and walking. Illustratively, the second virtual character is a second virtual person. The first virtual character and the second virtual character may belong to the same team or organization, have a friend relationship, or have temporary communication permissions.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different platforms. The first terminal 120 may generally refer to one of multiple terminals, as may the second terminal 160; this embodiment uses only the first terminal 120 and the second terminal 160 as examples.
The first terminal 120 and the second terminal 160 may be desktop devices or mobile devices. When they are mobile devices, their device types may be the same or different; the mobile devices include smartphones, tablet computers, and all portable electronic devices including but not limited to these.
Fig. 2 shows a flowchart of a sound prompting method in a virtual world provided by an exemplary embodiment of this application. The method may be performed by the first terminal 120 or the second terminal 160 shown in Fig. 1, collectively referred to as the terminal, and includes the following steps:
Step 202: Display a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information that includes an azimuth scale sequence.
The compass information (or compass control) indicates, with the first virtual character's standpoint in the virtual world as the reference point, the horizontal azimuths the first virtual character faces in the virtual world. Illustratively, a horizontal azimuth is expressed by longitude in the virtual world, for example 20 degrees east longitude or 160 degrees west longitude.
Illustratively, as shown in Fig. 3, the viewing-angle picture displays a first virtual character 10, a movement wheel 12, skill buttons 14, and compass information 16. The first virtual character 10 may be any movable object in the virtual world, for example a soldier located in the virtual world. The movement wheel 12 controls the movement of the first virtual character 10 in the virtual world, and the skill buttons 14 control the first virtual character 10 to cast skills or use items in the virtual world. The compass information 16 displays an azimuth scale sequence, which may be a sequence composed of multiple azimuth scales; the azimuth scales in the sequence indicate the horizontal azimuths the first virtual character faces in the virtual world. In Fig. 3, the azimuth scale sequence includes seven azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, and 255 degrees, where the scale of 215 degrees indicates the horizontal azimuth directly ahead of the first virtual character 10.
Step 204: Control the first virtual character to move about in the virtual world.
In the embodiments of this application, the user can control the first virtual character to move about in the virtual world. The activity may take many forms, such as moving, casting skills, and using items, each with its own control mode: the user may move the first virtual character 10 with the movement wheel 12, cast skills or use items by pressing one or more preset skill buttons 14, or control the first virtual character with signals produced by long-pressing, tapping, double-tapping, and/or swiping on the touchscreen.
Step 206: While the first virtual character is active in the virtual world, if a first sound source exists in the surroundings of the first virtual character, display a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator indicating the horizontal azimuth and the vertical azimuth of the first sound source.
Illustratively, the surroundings of the first virtual character are the virtual environment within a three-dimensional spherical range centered on the first virtual character with a preset distance as its radius, or within a three-dimensional hemispherical range on the ground plane centered on the first virtual character with a preset distance as its radius.
The first sound source may be a virtual element in the virtual world capable of producing sound, for example a second virtual character (friendly, hostile, or NPC), a virtual vehicle, a virtual flying object, various offensive weapons, or a virtual animal. The second virtual character is a virtual character in the virtual world other than the first virtual character, and there is at least one second virtual character. Because the virtual world is a digitally simulated environment, a sound in this application may refer to a sound event in the digital world, represented by a set of parameters including but not limited to at least one of: the three-dimensional coordinates of the sound source in the virtual world, the type of the sound source, the type of material the sound source touches, the sound type, the original loudness, and the equipment worn by the sound source.
If a first sound source exists in the surroundings of the first virtual character, a virtual element (also called an individual in the virtual world) in the surroundings is making a sound (for example, a first sound); that virtual element (individual) is the first sound source, so a first sound indicator can be displayed based on the first azimuth scale in the azimuth scale sequence. The first sound indicator indicates the horizontal azimuth and the vertical azimuth of the first sound source. The horizontal azimuth is an azimuth divided along the horizontal direction with the first virtual character as the center, for example longitude in the virtual world. The vertical azimuth is an azimuth divided along the vertical direction with the first virtual character as the center, for example the pitch angle of the sound source (for example, the first sound source) relative to the first virtual character. Optionally, the vertical azimuth is expressed by a vertical azimuth scale, similar to latitude; or by altitude; or, because virtual characters have limited room for movement in the vertical direction, the vertical azimuth may be simplified or abstracted into: an upper azimuth, a middle azimuth, and a lower azimuth.
In the embodiments of this application, the horizontal azimuth of the first sound source may be a first horizontal azimuth and the vertical azimuth of the first sound source a first vertical azimuth; the first horizontal azimuth may be indicated by the first azimuth scale, and the first vertical azimuth may be expressed by a first visual representation of the first sound indicator, the first visual representation being at least one of the first sound indicator's shape, pattern, color, texture, text, and animation effect.
Illustratively, the terminal may display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the first visual representation, the center of the first sound indicator being aligned with the first azimuth scale; the first azimuth scale indicates the horizontal azimuth of the first sound source, and the first visual representation indicates the vertical azimuth of the first sound source.
In summary, in the method provided by this embodiment, when a first sound source exists in the surroundings of the first virtual character, the first sound indicator is displayed based solely on the first azimuth scale in the azimuth scale sequence. Even without a minimap to prompt position, the first sound indicator simultaneously indicates the first horizontal azimuth and the first vertical azimuth corresponding to the first sound source, so that the user can accurately judge the spatial position of the first sound source from visual representation alone and can obtain sufficient effective spatial information about the sound source even in hearing-restricted scenarios where no loudspeaker or headset is available. Illustratively, as shown in Fig. 3, when a first sound source in the surroundings of the first virtual character 10 makes a first sound, a first sound indicator 19 is displayed based on the first azimuth scale "southwest" in the compass information 16: the first azimuth scale "southwest" indicates that the horizontal azimuth of the first sound source is the southwest azimuth, and the shape of the first sound indicator 19 indicates that the vertical azimuth of the first sound source is the middle azimuth.
For example, taking the first visual representation being shape, with reference to implementation (a) of Fig. 4: when the first sound indicator 19 is shaped as an upward triangle, the first sound source is in the upper azimuth relative to the first virtual character; when the first sound indicator 19 is shaped as a spindle, the first sound source is in the middle azimuth; and when the first sound indicator 19 is shaped as a downward triangle, the first sound source is in the lower azimuth.
For example, taking the first visual representation being the arrow on the left side of the first sound indicator 19, with reference to implementation (b) of Fig. 4: when the arrow on the left of the first sound indicator 19 points upward, the first sound source is in the upper azimuth relative to the first virtual character; when the arrow is a circle, the first sound source is in the middle azimuth; and when the arrow points downward, the first sound source is in the lower azimuth.
For example, taking the first visual representation being the fill color of the first sound indicator 19, with reference to implementation (c) of Fig. 4, the first sound indicator 19 includes three vertically arranged cells. When the top cell of the three is filled, the first sound source is in the upper azimuth relative to the first virtual character; when the middle cell is filled, the first sound source is in the middle azimuth; and when the bottom cell is filled, the first sound source is in the lower azimuth.
For example, taking the first visual representation being the shape of the first sound indicator 19 plus an attached number, with reference to implementation (d) of Fig. 4: when the first sound indicator 19 is shaped as an upward triangle and carries the number "+100m", the first sound source is in the upper azimuth relative to the first virtual character and 100 meters above the ground; when it is shaped as a spindle, the first sound source is in the middle azimuth; and when it is shaped as a downward triangle and carries the number "-15m", the first sound source is in the lower azimuth and 15 meters below the ground.
In the method provided by this embodiment, even when there is no minimap control on the user interface, various sound information about the first sound source can be prompted by the first sound indicator near the compass information. When a first sound source exists, multiple visual representations on the first sound indicator can prompt several aspects of the sound while occupying a very small screen area; when no first sound source exists, the head-up display (HUD) controls on the user interface are kept to a minimum, making the interface cleaner and more effective and giving the user a more immersive experience of the program.
Fig. 5 shows a flowchart of a sound prompting method in a virtual world provided by an exemplary embodiment of this application. The method may be performed by the first terminal 120 or the second terminal 160 shown in Fig. 1, collectively referred to as the terminal, and includes the following steps:
Step 202: Display a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information that includes an azimuth scale sequence.
The first virtual character is the virtual object controlled by the first user. The viewing-angle picture of the first virtual character is the picture obtained by observing the virtual world from the viewing angle of the first virtual character while the application runs on the terminal. Optionally, the viewing-angle picture of the first virtual character is obtained by observing the virtual world through the first-person viewing angle of the first virtual character.
Optionally, the first-person viewing angle of the first virtual character automatically follows the first virtual character in the virtual world; that is, when the position of the first virtual character in the virtual world changes, its first-person viewing angle changes at the same time and always stays within a preset distance range of the first virtual character in the virtual world.
The compass information includes an azimuth scale sequence whose azimuth scales indicate the horizontal azimuths the first virtual character faces in the virtual world. Illustratively, the azimuth scales of the horizontal azimuths observable from the first virtual character's viewing angle are displayed in the azimuth scale sequence, while scales of horizontal azimuths not observable from the current viewing angle may be omitted from the sequence. Alternatively, the azimuth scales within a preset range centered on the horizontal azimuth directly ahead of the first virtual character are displayed in the azimuth scale sequence.
Illustratively, as shown in Fig. 3, the viewing-angle picture displays a first virtual character 10, a movement wheel 12, skill buttons 14, and compass information 16. The first virtual character 10 may be a soldier located in the virtual world. The movement wheel 12 controls the movement of the first virtual character 10 in the virtual world, and the skill buttons 14 control the first virtual character 10 to cast skills or use items in the virtual world. The compass information 16 displays an azimuth scale sequence including seven azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, and 255 degrees, where the scale of 215 degrees indicates the horizontal azimuth directly ahead of the first virtual character 10.
Step 204: Control the first virtual character to move about in the virtual world.
The user may move the first virtual character 10 with the movement wheel 12, cast skills or use items by pressing one or more preset skill buttons 14, or control the first virtual character with signals produced by long-pressing, tapping, double-tapping, and/or swiping on the touchscreen.
Step 206: While the first virtual character is active in the virtual world, if a first sound source exists in the surroundings of the first virtual character, the first sound source being configured to make a first sound, determine the visual representation of the first sound indicator according to the sound parameters of the first sound.
The visual representation of the first sound indicator includes at least one of the following:
a first visual representation used to indicate the vertical azimuth of the first sound;
a second visual representation used to indicate the sound type of the first sound;
a third visual representation used to indicate the loudness of the first sound;
a fourth visual representation used to indicate the sound distance of the first sound;
a fifth visual representation used to indicate the action frequency of the first sound.
Each of these visual representations is of a different type, being one of the first sound indicator's shape, pattern, color, texture, text, animation effect, display start time, display duration, and fade-out time. Different visual representations can be superimposed on the same sound indicator, each conveying different sound information.
The display start time is the moment the first sound indicator appears on the user interface; the display duration is the total time the first sound indicator is displayed on the user interface; and the fade-out time is the time during which the first sound indicator's opacity decreases until it disappears from the user interface.
When a first sound source exists in the surroundings of the first virtual character, the first sound source triggers a sound event. If the first virtual character is the virtual character used by the user of the current terminal and the first sound source corresponds to another terminal, that other terminal synchronizes the sound event to the current terminal through the server; if the first sound source corresponds to the current terminal, the current terminal itself generates the sound event.
The sound event carries sound parameters, from which the terminal can determine the visual representation of the first sound indicator. The sound parameters include but are not limited to at least one of: the type of the first sound source, the material of the first sound source, the equipment of the first sound source, the position of the first sound source, the sound type of the first sound, the loudness of the first sound, and the action frequency of the first sound source.
In some cases, the virtual world contains many movable objects, most of which can make sounds and serve as the first sound source; even a single movable object (a single sound source) may make multiple sounds. To identify the sound made by the first sound source accurately, when the first sound source makes at least two sounds whose generation times differ by less than a threshold, the loudest of the at least two sounds is determined as the first sound.
For example, when the first virtual character fires while walking, the firing event is louder than the walking event, so the firing event is determined as the first sound event: the firing event's sound indicator is displayed and the walking event's sound indicator is suppressed.
For example, when the first sound source produces a new sound event whose volume is greater than the current sound event's, only the new event's sound indicator is displayed. If the first virtual character takes a few steps in place and immediately opens fire, the firing event is louder than the walking event, so the firing event is determined as the first sound event: the walking event's sound indicator disappears and only the firing event's sound indicator is shown.
In the method provided by this embodiment, by determining the loudest of at least two sounds generated within a threshold time difference as the first sound, unnecessary computation for quieter sound events is reduced and the accuracy of the sound prompt is improved.
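The loudest-of-near-simultaneous-sounds rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `(timestamp, volume)` event shape and the 0.5 s default threshold are assumptions for the example.

```python
def select_first_sound(events, time_threshold=0.5):
    """Pick the sound to display for one source.

    events: list of (timestamp_seconds, volume) tuples from a single source.
    Among events whose generation times fall within `time_threshold` of the
    newest event, the loudest one is chosen as the "first sound"; earlier,
    quieter events (e.g. footsteps just before gunfire) are suppressed.
    """
    if not events:
        return None
    latest = max(t for t, _ in events)
    # Keep only events inside the near-simultaneous window.
    window = [e for e in events if latest - e[0] < time_threshold]
    # The loudest event in the window wins.
    return max(window, key=lambda e: e[1])
```

For example, footsteps at volume 30 followed 0.1 s later by gunfire at volume 100 yields the gunfire event, matching the walk-then-fire example in the text.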
Step 208: Display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the visual representation.
The terminal may display, based on the first azimuth scale in the compass information, a first sound indicator having at least one visual representation.
Depending on where the compass information is displayed on the user interface, the first sound indicator can be placed at a suitable position: when the compass information is displayed at the top of the user interface, the first sound indicator is displayed below the first azimuth scale in the compass information; when the compass information is displayed at the bottom, the first sound indicator is displayed above the first azimuth scale. Optionally, the center of the first sound indicator is aligned with the first azimuth scale; that is, the central axis of the first sound indicator is aligned with the first azimuth scale.
Illustratively, the visual representation of the first sound indicator may include the first visual representation, which indicates the vertical azimuth of the first sound. In a possible implementation, the visual representation of the first sound indicator may include other visual representations in addition to the first; that is, the terminal may display, based on the first azimuth scale, a first sound indicator having both the first visual representation and other visual representations, the other visual representations including at least one of the following:
a second visual representation used to indicate the sound type of the first sound;
a third visual representation used to indicate the loudness of the first sound;
a fourth visual representation used to indicate the sound distance of the first sound;
a fifth visual representation used to indicate the action frequency of the first sound.
Regarding the first visual representation:
The first visual representation includes n kinds of first visual representations in one-to-one correspondence with n vertical azimuths, n being a positive integer greater than 1. The client displays, based on the first azimuth scale in the compass information, a first sound indicator having the i-th first visual representation, i being a positive integer not greater than n; the i-th first visual representation indicates that the first sound source corresponds to the i-th vertical azimuth. Illustratively, the i-th visual representation may mark the first sound indicator with a height corresponding to the i-th vertical azimuth at which the first sound source lies. Illustratively, the n vertical azimuths may include an upper azimuth, a middle azimuth, and a lower azimuth, so that the first sound indicator distinguishes upper, middle, and lower layers of space.
The purpose of the first visual representation is to let the user learn the vertical azimuth of the first sound source by observing the visual representation of the first sound indicator; there may be one or more ways of expressing the vertical azimuth visually. Illustratively, the first visual representation includes at least one of: the shape of the first sound indicator, a vertical azimuth scale in the first sound indicator, an arrow in the first sound indicator, and a text prompt in the first sound indicator.
Taking the first visual representation including shape as an example, this step includes one of the following three:
· Display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with an upward shape, the first sound indicator indicating that the vertical azimuth of the first sound source is the upper azimuth.
Illustratively, referring to Fig. 6, suppose the first sound source is a hostile virtual character on a rooftop. Since the hostile virtual character's height in the vertical direction places it in the upper azimuth relative to the first virtual character 10, a first sound indicator with an upward shape is displayed below the first azimuth scale "215" in the compass information 16 to indicate that the vertical azimuth of the first sound is the upper azimuth.
· Display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with a vertically symmetric shape, the first sound indicator indicating that the vertical azimuth of the first sound source is the middle azimuth.
· Display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with a downward shape, the first sound indicator indicating that the vertical azimuth of the first sound source is the lower azimuth.
Conversely to Fig. 6, if the first virtual character 10 is on the rooftop and the hostile virtual character is on the ground below, the vertical azimuth of the first sound source is the lower azimuth.
Taking the first visual representation including a vertical azimuth scale as an example, a first sound indicator with a vertical azimuth scale is displayed based on the first azimuth scale in the azimuth scale sequence; the vertical azimuth scale indicates the vertical azimuth of the first sound source, or the pitch angle of the first sound source relative to the first virtual character. That is, the vertical azimuth scale is expressed by the pitch angle of the first sound source relative to the first virtual character.
Taking the first visual representation including an arrow as an example, a first sound indicator with an arrow is displayed based on the first azimuth scale in the azimuth scale sequence, the direction of the arrow indicating the vertical azimuth of the first sound source: for example, an upward arrow represents the upper azimuth above the plane of the first virtual character, and a downward arrow represents the lower azimuth below that plane.
Taking the first visual representation including a text prompt as an example, a first sound indicator with a text prompt is displayed based on the first azimuth scale in the azimuth scale sequence, the text prompt indicating the vertical azimuth of the first sound source.
Combinations of at least two of the above shape, vertical azimuth scale, arrow, and text prompt can also serve as the first visual representation, which is not limited here.
Regarding the second visual representation:
The second visual representation indicates the sound type of the first sound. In this case, determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be determining the second visual representation of the first sound indicator according to the sound type of the first sound.
In some embodiments, the second visual representation includes the color of the first sound indicator; that is, first sound indicators of different colors indicate different sound types of the first sound. For example, white represents a virtual character's footsteps or NPC shouting, and red represents gunshots or explosions.
Illustratively, referring to Fig. 7, for two sounds made by different sources, two sound indicators 19a and 19b are displayed on the user interface: sound indicator 19a is white, representing footsteps; sound indicator 19b is red, representing gunfire.
In some embodiments, the second visual representation includes the icon style of the first sound indicator; that is, first sound indicators with different icon styles indicate different sound types of the first sound.
Illustratively, referring to Fig. 8, a firearm icon style 191 represents the sound type "gunshot"; a footprint icon style 192 represents the sound type "footsteps"; a head icon style 193 represents the sound type "human voice"; and an explosion icon style 194 represents the sound type "explosion".
In some embodiments, the second visual representation is the display duration of the first sound indicator, which includes a first duration during which the first sound indicator is displayed opaquely, and a second duration (i.e. the fade-out duration) during which the display fades from opaque to transparent and is canceled. That is, different display durations indicate different sound types of the first sound: different sound types may correspond to different first durations, or to different second durations, or to both.
Regarding the third visual representation:
The third visual representation indicates the loudness of the first sound. Illustratively, the loudness of the first sound refers to the arrival loudness of the first sound at the first virtual character, simulating the loudness of the first sound the first virtual character actually hears rather than the first sound's original loudness. In this case, the first sound indicator is expressed as a sound-wave amplitude spectrum, the third visual representation includes the amplitude of the first sound-wave amplitude spectrum, the sound parameters include a loudness, and determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be determining the amplitude of the first sound-wave amplitude spectrum according to the arrival loudness of the first sound at the first virtual character.
Illustratively, referring to Fig. 7, the sound indicator is expressed as a sound-wave amplitude spectrum whose height represents the wave amplitude. For two sounds made by different sources, two sound indicators 19a and 19b are displayed on the user interface: the wave amplitude of sound indicator 19a is smaller than that of sound indicator 19b, so the loudness corresponding to 19a is lower than that corresponding to 19b. That is, for footsteps and gunfire at the same distance, the left sound indicator 19a has a smaller wave amplitude and the right sound indicator 19b a larger one.
Illustratively, referring to Fig. 9, the sound indicator is expressed as a sound-wave amplitude spectrum whose height represents the wave amplitude, and two sounds of different loudness are expressed with different wave amplitudes. Fig. 9 shows that in the 100-200 m distance range, as gunfire weakens with distance, the amplitude of the first sound indicator weakens accordingly.
Regarding the fourth visual representation:
The fourth visual representation indicates the sound distance of the first sound. Illustratively, the fourth visual representation is expressed by the display start time of the first sound indicator: after the sound event of the first sound is received, the first sound indicator is not displayed immediately but only after a delay whose length depends on the sound distance, i.e. the distance between the first sound source and the first virtual character.
Regarding the fifth visual representation:
The fifth visual representation indicates the action frequency of the first sound. Illustratively, the first sound indicator is expressed as a sound-wave amplitude spectrum, and the fifth visual representation includes the jitter frequency of the sound-wave amplitude spectrum; the spectrum's height represents the wave amplitude, and the spectrum can stretch and shrink dynamically to express the jitter of the sound wave. Because the magnitude of the action producing the first sound differs, the action frequency of the first sound differs too. In this case, the sound parameters include the action frequency of the first sound source, and determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be determining the jitter frequency of the sound-wave amplitude spectrum according to the action frequency with which the first sound source makes the first sound.
For example, when the virtual character at the first sound source is running, the sound-wave amplitude spectrum is displayed in white and its jitter frequency is higher, showing the urgency of running; when the virtual character at the first sound source lowers its pace and crouch-walks, the jitter frequency of the spectrum is lower, conveying slow advance and distinguishing it from running.
Step 210: If a second sound source exists in the surroundings of the first virtual character and the horizontal azimuth of the second sound source lies outside the visible azimuth range, display a second sound indicator based on the edge azimuth scale in the azimuth scale sequence closest to the horizontal azimuth of the second sound source, the second sound indicator representing that the second sound source exists along the horizontal azimuth indicated by the edge azimuth scale.
A second sound source may also exist in the surroundings of the first virtual character. Before displaying a second sound indicator below the azimuth scale sequence, it is judged whether the horizontal azimuth corresponding to the second sound source (which may also be called the second horizontal azimuth) lies within the visible azimuth range of the first virtual character.
When the horizontal azimuth corresponding to the second sound source lies within the visible azimuth range of the first virtual character, a second sound indicator is displayed based on a second azimuth scale in the azimuth scale sequence, for example prompting the horizontal azimuth and vertical azimuth of the second sound with a second sound-wave amplitude spectrum.
When the horizontal azimuth corresponding to the second sound source lies outside the visible azimuth range of the first virtual character, the second sound indicator is displayed based on the edge azimuth scale in the azimuth scale sequence closest to the horizontal azimuth of the second sound source, prompting the existence of the second sound source or the second sound.
Illustratively, as shown in Fig. 10, within the visible range of the first virtual character 10, a second sound indicator 19 is displayed based on an edge azimuth scale of the compass information 18. The second sound indicator 19 may be aligned with the edge azimuth scale or extend beyond it, and represents that a second sound source making a second sound exists in the invisible region to the right of the first virtual character 10.
Optionally, the second sound indicator 19 has at least one of the five visual representations above; the number of visual representation types the second sound indicator 19 has is equal to or fewer than the first sound indicator's. For example, the second sound indicator 19 may use only color or icon style to show the sound type of the second sound.
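The in-range versus edge-clamped placement logic above can be sketched as follows. This is an illustrative sketch only: the text does not give formulas, so the heading convention (degrees, clockwise) and the ±90° half field-of-view default are assumptions, the latter loosely mirroring the 90° icon display angle threshold mentioned later for Fig. 17.

```python
def indicator_anchor(source_heading, facing_heading, half_fov=90.0):
    """Return (scale_degrees, is_edge) for anchoring a sound indicator.

    source_heading / facing_heading: horizontal azimuths in degrees.
    If the source lies within ±half_fov of the facing direction, the
    indicator anchors at the source's own azimuth scale (first-indicator
    case). Otherwise the offset is clamped to the nearest edge scale and
    flagged, which is the second-indicator case of step 210.
    """
    # Signed offset folded into (-180, 180].
    offset = (source_heading - facing_heading + 180.0) % 360.0 - 180.0
    if -half_fov <= offset <= half_fov:
        return (source_heading % 360.0, False)
    clamped = half_fov if offset > 0 else -half_fov
    return ((facing_heading + clamped) % 360.0, True)
```

With the character facing 215 degrees as in Fig. 3, a source at 240 degrees anchors at its own scale, while a source at 335 degrees is clamped to the right edge scale at 305 degrees.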
Step 212: When the first virtual character enters a deafened state, cancel or skip displaying the (first) sound indicator.
When the first virtual character is attacked by a first sound projectile, the received sound is so loud that the first virtual character enters a deafened state, which the terminal expresses as the first virtual character being unable to hear. While the first virtual character is in the deafened state, the terminal cancels the display of the first sound indicator below the azimuth scale sequence of the compass information, i.e. cancels the display of all sound indicators corresponding to the first virtual character.
The first sound projectile is a grenade, a bomb, or the like; when a grenade explodes close to the first virtual character, the first virtual character enters the deafened state.
In summary, in the method provided by this embodiment, when a first sound source exists in the surroundings of the first virtual character, the first sound indicator is displayed based solely on the first azimuth scale in the compass information. Even without a minimap to prompt position, the first sound indicator simultaneously indicates the first horizontal azimuth and the first vertical azimuth corresponding to the first sound source, so that the user can accurately judge the spatial position of the first sound source from visual representation alone and can obtain sufficient effective spatial information about the sound source even in hearing-restricted scenarios where no loudspeaker or headset is available.
The method provided by this embodiment also uses the different visual representations coexisting on the first sound indicator to indicate, respectively, the sound's vertical azimuth, sound type, loudness, sound distance, and action frequency, so that the user can obtain the first sound's vertical azimuth, sound type, loudness, sound distance, and action frequency from the different visual representations alone, and can obtain effective information about the first sound even in hearing-restricted scenarios where no loudspeaker or headset is available. Meanwhile, because the first sound indicator occupies a very small area of the user interface, display space is saved; and with no minimap to prompt position, prompting the sound in multiple aspects solely through the compass-based first sound indicator's multiple visual representations gives the user a more immersive game experience.
The method provided by this embodiment further improves the accuracy with which the user judges the spatial position of the first sound source from visual representation alone, by canceling or skipping the display of the (first) sound indicator when the first virtual character enters the deafened state.
For the five visual representations above, the foregoing step of determining the visual representation may optionally further include at least one of the following steps:
Regarding the first visual representation:
Calculate the pitch angle of the first sound source relative to the first virtual character according to the positions of the first sound source and the first virtual character in the virtual environment; based on the value range of the pitch angle, determine the vertical azimuth of the first sound source.
Illustratively, as shown in Fig. 11, the pitch angle determines the vertical azimuth of the first sound source: when the pitch angle of the first sound source relative to the first virtual character is in the range of -17° to 17°, 163° to 180°, or -163° to -180°, the vertical azimuth of the first sound source is the middle azimuth relative to the first virtual character; when the pitch angle is in the range of 17° to 163°, the vertical azimuth of the first sound source is the upper azimuth relative to the first virtual character; when the pitch angle is in the range of -17° to -163°, the vertical azimuth of the first sound source is the lower azimuth relative to the first virtual character.
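The pitch-angle classification of Fig. 11 can be sketched directly from the ranges above. This is a minimal illustration under the stated ranges; the treatment of the exact boundary values (17°, 163°) is an assumption, since the text leaves the endpoints ambiguous.

```python
def vertical_zone(pitch_deg):
    """Classify a source's pitch angle (degrees, relative to the first
    virtual character) into the three vertical zones of Fig. 11:
    "middle" for [-17, 17], [163, 180], and [-180, -163];
    "upper"  for (17, 163);
    "lower"  for (-163, -17)."""
    p = pitch_deg
    if -17 <= p <= 17 or 163 <= p <= 180 or -180 <= p <= -163:
        return "middle"
    if 17 < p < 163:
        return "upper"
    return "lower"
```

A rooftop enemy at a 45° pitch, for instance, classifies as "upper", matching the upward-shaped indicator of Fig. 6.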
Regarding the second visual representation:
When the second visual representation includes the color of the first sound indicator, determine the color of the first sound indicator according to the sound type of the first sound.
Illustratively, Table 1 shows a first correspondence between sound types and colors.
Table 1
Color   Sound type
White   Footsteps
Red     Gunshot
Orange  Bomb
Blue    Shouting
By querying the first correspondence, the terminal can determine the color corresponding to the sound type of the first sound and use that color as the color of the first sound indicator.
When the second visual representation includes the icon style of the first sound indicator, determine the icon style of the first sound indicator according to the sound type of the first sound.
Illustratively, Table 2 shows a second correspondence between sound types and icon styles.
Table 2
Icon style            Sound type
Footprint icon style  Footsteps
Firearm icon style    Gunshot
Flame icon style      Bomb
Head icon style       Shouting
By querying the second correspondence, the terminal can determine the icon style corresponding to the sound type of the first sound and use that icon style as the icon style of the first sound indicator.
Regarding the third visual representation:
When the first sound indicator is expressed as a sound-wave amplitude spectrum and the third visual representation includes the amplitude of the first sound-wave amplitude spectrum:
Step 1: Determine the arrival loudness of the first sound according to the original loudness of the first sound and influence parameters, the influence parameters including at least one of the following:
· the distance between the first sound source and the first virtual character;
Loudness attenuates as sound propagates over distance: the longer the distance between the first sound source and the first virtual character, the quieter the first sound; the shorter the distance, the louder the first sound.
· the equipment worn by the first virtual character;
The loudness-related equipment worn by the first virtual character includes at least one of various helmet types and headsets. Both the equipment type and whether the equipment is worn affect the loudness of the first sound.
Illustratively, for the same original loudness, the arrival loudness determined when the first virtual character wears a headset is greater than when the first virtual character does not; the arrival loudness determined when the first virtual character wears a helmet is smaller than when the first virtual character does not.
· when the first sound source is a second virtual character, the equipment worn by the second virtual character;
The loudness-related equipment worn by the second virtual character includes at least one of different firearm types, different ammunition types, and a suppressor. Both the equipment type and whether the equipment is worn affect the loudness of the first sound.
Illustratively, for the same original loudness, the arrival loudness determined when the second virtual character has a suppressor fitted is smaller than when the second virtual character does not.
· the material of the first sound source or the material the first sound source touches.
For example, the sound of the first virtual character's shoes striking different ground surfaces affects loudness, as does striking the same kind of ground with shoes of different materials.
Illustratively, arrival loudness = (original loudness × original-loudness influence coefficient) × (1 − sound distance / (maximum effective sound distance × maximum-distance influence coefficient)).
The original loudness is the loudness of the first sound as emitted at the first sound source. Illustratively, the original-loudness influence coefficient relates to the influence parameters above and is usually set as an empirical value by the designer; the maximum-distance influence coefficient indicates the sound attenuation characteristic, likewise relates to the influence parameters above, and is usually set as an empirical value by the designer.
For example: suppose the first virtual character wears a sound-isolating helmet and hears a suppressed gunshot fired from 75 m away; the gunshot's original loudness is 100 and its maximum effective distance is 150 m. The suppressor's influence coefficient on original loudness is 1 and on maximum distance 0.5; the helmet's influence coefficient on original loudness is 0.5 and on maximum distance 0.5.
Arrival loudness = (100 × 1 × 0.5) × (1 − 75 / (150 × 0.5 × 0.5)) = 50 × (−1) = −50; negative values are clamped to zero, so the sound is inaudible, and no wave is displayed on the first sound indicator.
As another example: suppose the first virtual character wears a sound-isolating helmet and hears a suppressed gunshot fired from 30 m away, the gunshot's original loudness being 100 and its maximum effective distance 150 m.
Arrival loudness = (100 × 0.5) × (1 − 30 / (150 × 0.5 × 0.5)) = 50 × 0.2 = 10.
Step 2: Determine the amplitude of the first sound-wave amplitude spectrum according to the arrival loudness of the first sound at the first virtual character.
Through a "loudness-to-wave-amplitude" conversion curve, the client maps the arrival loudness of the first sound to the wave amplitude of the first sound-wave amplitude spectrum.
For example, a rifle shot 150 m from the player has loudness 100 × (1 − 150/200) = 25; the value 25 is converted by the conversion curve to a wave amplitude of 0.22, which scales the wave amplitude of the first sound-wave amplitude spectrum by a factor of 0.22.
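The arrival-loudness formula and negative-clamping rule above can be sketched as a small function. This is a minimal sketch of the stated formula, not the patent's implementation; the parameter names and the way the helmet and suppressor coefficients are folded into two combined factors are choices made for the example.

```python
def arrival_loudness(original, distance, max_range,
                     vol_coeff=1.0, range_coeff=1.0):
    """Arrival loudness per the formula in the text:

        (original * vol_coeff) * (1 - distance / (max_range * range_coeff))

    vol_coeff combines all original-loudness influence coefficients
    (e.g. helmet 0.5 * suppressor 1.0), and range_coeff combines all
    maximum-distance influence coefficients (e.g. 0.5 * 0.5).
    Negative results are clamped to zero, i.e. the sound is inaudible.
    """
    loud = (original * vol_coeff) * (1.0 - distance / (max_range * range_coeff))
    return max(loud, 0.0)
```

Plugging in the two worked examples: the 75 m suppressed shot heard through a helmet (`vol_coeff=0.5`, `range_coeff=0.25`) clamps to 0 (inaudible), and the 30 m case yields 10.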
Regarding the fourth visual representation:
· When the fourth visual representation includes the display start time of the first sound indicator, determine the display start time of the first sound indicator based on the sound propagation speed between the first sound source and the first virtual character, the display start time being later than the generation time of the first sound.
Illustratively, display start time = generation time of the first sound + sound distance / sound propagation speed in the virtual environment;
where the generation time of the first sound is the time at which the first sound source makes the first sound, the sound distance is the distance between the first sound source and the first virtual character, and the sound propagation speed in the virtual environment is usually set as an empirical value by the designer.
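The delayed-display rule above is a one-line computation. The 340 m/s default below is an assumed placeholder (roughly the real-world speed of sound); the text leaves the in-game value to the designer.

```python
def indicator_start_time(emit_time, distance, sound_speed=340.0):
    """Start-of-display time for the sound indicator: the emission time
    plus the travel delay of the sound over `distance` at the virtual
    world's propagation speed, so farther sources appear later."""
    return emit_time + distance / sound_speed
```

For example, a gunshot 340 m away emitted at t = 0 s would have its indicator appear at t = 1 s.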
Regarding the fifth visual representation:
· When the first sound indicator is expressed as a sound-wave amplitude spectrum and the fifth visual representation includes the jitter frequency of the sound-wave amplitude spectrum, determine the jitter frequency of the spectrum according to the action frequency with which the first sound source makes the first sound.
Illustratively, when the first virtual character is running, the sound-wave amplitude spectrum is displayed in white and its jitter frequency is higher, showing the urgency of running; when the first virtual character lowers its pace and crouch-walks, the jitter frequency of the spectrum is lower, conveying slow advance and distinguishing it from running.
This application does not limit the order of the above calculation processes.
In some embodiments, developers can configure the influence coefficients of the tactical items and equipment mentioned in the embodiments above on the arrival loudness of the first sound. Illustratively, referring to Fig. 12, Fig. 12 shows a configuration interface 1200 for the helmet headset's influence coefficients on the arrival loudness of the first sound. The interface 1200 includes three configuration items: a wave-amplitude increase coefficient 1201, a loudness influence coefficient 1202, and a maximum-sound-distance influence coefficient 1203. Item 1201 configures the influence coefficient on the first sound's wave amplitude when the first virtual character wears the helmet headset; item 1202 configures the influence coefficient on the arrival loudness of the first sound when the first virtual character wears the helmet headset; and item 1203 configures the influence coefficient on the maximum distance the first sound can travel when the first virtual character wears the helmet headset. Illustratively, when items 1202 and 1203 are both 1, wearing the helmet headset has no influence on the arrival loudness or the maximum distance of the first sound.
Fig. 13 shows a configuration interface 1300 for the suppressor's influence coefficients on the arrival loudness of the first sound. The interface 1300 includes two configuration items: a suppressor loudness influence coefficient 1301 and a suppressor maximum-sound-distance influence coefficient 1302. Item 1301 configures the influence coefficient on the arrival loudness of the first sound when a suppressor is fitted, and item 1302 configures the influence coefficient on the maximum distance the first sound can travel when a suppressor is fitted. Illustratively, when item 1301 is 1, the suppressor has no influence on the arrival loudness of the first sound.
Fig. 14 shows a configuration interface 1400 for the influence coefficients of stepped-on materials on the arrival loudness of the first sound. For a marble-metal material 1410, the interface 1400 includes two configuration items: a loudness influence coefficient 1401 and a maximum-sound-distance influence coefficient 1402. Item 1401 configures the influence coefficient on the arrival loudness of the first sound when the first virtual character steps on marble-metal ground, and item 1402 configures the influence coefficient on the maximum distance the first sound can travel when the first virtual character steps on marble-metal ground. Illustratively, when items 1401 and 1402 are both 1, stepping on marble-metal ground has no influence on the arrival loudness or the maximum distance of the first sound.
In some embodiments, developers can configure the influence coefficients of the different sound types mentioned in the embodiments above on the first sound. Illustratively, referring to Fig. 15, Fig. 15 shows a configuration interface 1500 for different sound types and their basic configuration parameters. On the interface 1500, the sound types sprint 1501, crawl 1502, crouch-walk 1503, quiet walk 1504, quiet crawl 1505, quiet crouch-walk 1506, single-shot fire 1507, burst fire 1508, NPC shout 1509, grass brush 1510, injured shout 1511, dying shout 1512, explosion 1513, and walk 1514 are each configured with eight configurable parameters: basic loudness 1520, sound icon display time 1530, sound icon fade-out time 1540, icon index 1550, sound icon refresh interval 1560, maximum effective sound range 1570, sound waveform jitter frequency 1580, and opacity curve 1590.
In some embodiments, developers can configure the configuration parameters corresponding to the different firearm types mentioned in the embodiments above, which vary by addition and subtraction relative to the firing type. Illustratively, referring to Fig. 16, Fig. 16 shows a configuration interface 1600 for parameters of different firearm types, including a configuration page 1600 for single-shot-fire base parameters and, on that basis, a pistol parameter configuration page 1601 and a bolt-action-rifle parameter configuration page 1602. The single-shot-fire page includes basic loudness 1620, sound icon display time 1630, sound icon fade-out time 1640, icon index 1650, sound icon refresh interval 1660, maximum effective sound range 1670, and sound waveform jitter frequency 1680. The pistol page sets the maximum effective sound range 1670 item to -100 meters: the pistol's range is 100 meters smaller than the single-shot-fire value of 200 meters, so the pistol's maximum effective sound range 1670 is 100 meters, while its basic loudness 1620, sound icon display time 1630, sound icon fade-out time 1640, sound icon refresh interval 1660, and sound waveform jitter frequency 1680 are the same as single-shot fire. The bolt-action-rifle page sets basic loudness 1620 to 20 and sound icon fade-out time 1640 to 0.3: the bolt-action rifle's basic loudness 1620 is 20 greater than single-shot fire's and its fade-out time 1640 is 0.3 longer, while its maximum effective sound range 1670, sound icon display time 1630, sound icon refresh interval 1660, and sound waveform jitter frequency 1680 are the same as single-shot fire.
In some embodiments, developers can configure the general configuration parameters mentioned in the embodiments above. Illustratively, referring to Fig. 17, Fig. 17 shows a configuration interface 1700 for general configuration parameters, including a maximum number of displayed sound waves 1710, an icon display angle threshold 1720, an upper-angle threshold 1730, a lower-angle threshold 1740, a gunfire sound icon color 1750, a character sound icon color 1760, and a mapping curve 1770 from sound-wave height to loudness. The icon display angle threshold 1720 item is 90°: by default, sound icons are displayed for valid sound sources within the range covered by rotating the first virtual character's camera orientation 90° to the left and to the right. In the mapping curve 1770 from sound-wave height to loudness, the horizontal axis is sound intensity and the vertical axis is waveform height; the curve converts the arrival loudness of the first sound into the corresponding wave height.

In summary, in the method provided by this embodiment, when a first sound source exists in the surroundings of the first virtual character, a first sound indicator is displayed based on the first azimuth scale in the compass information; the first sound indicator simultaneously indicates the first horizontal azimuth and the first vertical azimuth corresponding to the first sound source, so that the user can accurately judge the spatial position of the first sound source from visual representation alone and can obtain sufficient effective spatial information about the sound source even in hearing-restricted scenarios where no loudspeaker or headset is available.
The method provided by this embodiment can further distinguish the attributes of the first sound source by the amplitude, jitter frequency, and duration of the sound indicator and the sound-wave amplitude spectrum beneath it, and determines the vertical azimuth of the first sound source in the virtual world by comparing the source angle against the pitch-angle ranges, improving the prompting of loudness, sound frequency, and sound type.
Fig. 18 shows a flowchart of a sound prompting method in a virtual world provided by an exemplary embodiment of this application. The method may be performed by the first terminal 120 or the second terminal 160 shown in Fig. 1, collectively referred to as the terminal, and includes the following steps:
Step 1802: Obtain the parameters of the first sound.
The terminal obtains the parameters of the first sound made by the first sound source; in other words, the terminal obtains the parameters of the sound event of the first sound made by the first sound source.
Step 1804: Identify the sound type of the first sound.
The sound type of the first sound is determined from the sound parameters obtained by the terminal, and the second visual representation carried by the corresponding first sound indicator is determined according to the different sound types.
In some embodiments, the second visual representation is color: if the first sound is judged from its sound parameters to be a virtual character's footsteps or NPC shouting, the first sound indicator is expressed in white; if it is judged to be a gunshot or an explosion, the first sound indicator is expressed in red.
In some embodiments, the second visual representation is icon style: the type of the first sound is judged from its sound parameters, and the first sound indicator is expressed with the icon style corresponding to the first sound. For example, if the first sound is judged to be a virtual character's footsteps, the first sound indicator is expressed with a footprint icon style; if a gunshot, with a firearm icon style.
Step 1806a: Calculate the arrival loudness from the distance between the first sound source and the first virtual character configured for the type of the first sound, and from the original loudness of the first sound source.
The sound event of the first sound carries the three-dimensional coordinates of the first sound source. By computing the distance between the three-dimensional coordinates of the first sound source and those of the first virtual character, the terminal can calculate the distance between the first sound source and the first virtual character.
Because loudness attenuates as sound propagates over distance, the longer the distance between the first sound source and the first virtual character, the quieter the first sound; the shorter the distance, the louder the first sound.
The sound event of the first sound also carries the original loudness of the first sound source; the terminal attenuates the original loudness of the first sound using the distance as an influence parameter.
Step 1806b: Calculate the arrival loudness according to influence parameters such as tactical items and equipment.
The terminal also determines the corresponding influence coefficients from influence parameters such as tactical items and equipment, and then calculates the final arrival loudness of the first sound.
Illustratively, tactical items and equipment (collectively, equipment) include:
· the equipment worn by the first virtual character;
The loudness-related equipment worn by the first virtual character includes at least one of various helmet types and headsets. Both the equipment type and whether the equipment is worn affect the loudness of the first sound.
· when the first sound source is a second virtual character, the equipment worn by the second virtual character;
The loudness-related equipment worn by the second virtual character includes at least one of different firearm types, different ammunition types, and a suppressor. Both the equipment type and whether the equipment is worn affect the loudness of the first sound.
· the material of the first sound source or the material the first sound source touches.
For example, the sound of the first virtual character's shoes striking different ground surfaces affects loudness, as does striking the same kind of ground with shoes of different materials.
Illustratively, arrival loudness = (original loudness × original-loudness influence coefficient) × (1 − sound distance / (maximum effective sound distance × maximum-distance influence coefficient)).
The original loudness is the loudness of the first sound as emitted at the first sound source. Illustratively, the original-loudness influence coefficient relates to the influence parameters above and is usually set as an empirical value by the designer; the maximum-distance influence coefficient indicates the sound attenuation characteristic, likewise relates to the influence parameters above, and is usually set as an empirical value by the designer.
For example: suppose the first virtual character wears a sound-isolating helmet and hears a suppressed gunshot fired from 75 m away; the gunshot's original loudness is 100 and its maximum effective distance is 150 m. The suppressor's influence coefficient on original loudness is 1 and on maximum distance 0.5; the helmet's influence coefficient on original loudness is 0.5 and on maximum distance 0.5.
Arrival loudness = (100 × 1 × 0.5) × (1 − 75 / (150 × 0.5 × 0.5)) = 50 × (−1) = −50; negative values are clamped to zero, so the sound is inaudible, and no wave is displayed on the first sound indicator.
As another example: suppose the first virtual character wears a sound-isolating helmet and hears a suppressed gunshot fired from 30 m away, the gunshot's original loudness being 100 and its maximum effective distance 150 m.
Arrival loudness = (100 × 0.5) × (1 − 30 / (150 × 0.5 × 0.5)) = 50 × 0.2 = 10.
Note that steps 1806a and 1806b may be computed at the same time; alternatively, the loudness before the influence of tactical items and equipment may first be computed from the distance between the first sound source and the first virtual character and the original loudness of the first sound source as an intermediate value, after which the final arrival loudness under the influence of tactical items and equipment is computed.
Step 1808: Judge whether the horizontal azimuth of the first sound source is within the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information.
The compass information includes an azimuth scale sequence whose azimuth scales indicate the horizontal azimuths the first virtual character faces in the virtual world. Illustratively, the azimuth scales of the horizontal azimuths observable from the first virtual character's viewing angle are displayed in the azimuth scale sequence, while scales of horizontal azimuths not observable from the current viewing angle may be omitted from the sequence. Alternatively, the azimuth scales within a preset range centered on the horizontal azimuth directly ahead of the first virtual character are displayed in the azimuth scale sequence.
If the horizontal azimuth of the first sound source is not within the visible azimuth range corresponding to the displayed azimuth scale sequence, perform step 1810 and express the sound in the form of a second sound indicator; if it is within the visible azimuth range, perform step 1812 and calculate from the pitch angle whether the source is above or below the first virtual character.
Step 1810: Express the sound in the form of a second sound indicator.
If the horizontal azimuth of the first sound source is judged not to lie within the visible azimuth range corresponding to the displayed azimuth scale sequence, a second sound indicator is displayed based on the edge azimuth scale in the azimuth scale sequence closest to the second horizontal azimuth, and the sound is determined as the second sound or second sound source and indicated by the second sound indicator.
Step 1812: Judge from the pitch-angle calculation whether the first sound source is above or below the first virtual character.
From the position coordinates of the first sound source and the first virtual character in the virtual environment obtained by the terminal, calculate the pitch angle of the first sound source relative to the first virtual character; based on the value range of the pitch angle, determine the vertical azimuth of the first sound source.
When the first sound source is above or below the first virtual character, perform step 1814 and express the sound in the form of an upper or lower first sound indicator: when the first sound source is above the first virtual character, display the upper form of the first sound indicator to express upper information; when the first sound source is below, display the lower form to express lower information. When the first sound source is neither above nor below the first virtual character, perform step 1816 and express the sound in the middle form of the first sound indicator; that is, the first sound source is level with the first virtual character, and the middle form of the first sound indicator expresses middle information.
Illustratively, as shown in Fig. 11, when the pitch angle of the first sound source relative to the first virtual character is in the range of -17° to 17°, 163° to 180°, or -163° to -180°, the first sound source is level with the first virtual character; when the pitch angle is in the range of 17° to 163°, the first sound source is above the first virtual character; when the pitch angle is in the range of -17° to -163°, the first sound source is below the first virtual character.
Step 1814: Express the sound in the form of an upper or lower first sound indicator.
Step 1816: Express the sound in the form of a middle first sound indicator.
Step 1818: Judge whether the first sound is a gunshot.
Whether the first sound is a gunshot is determined from the sound parameters obtained by the terminal. If it is a firearm sound, perform step 1820 and differentiate the expression of the first sound according to the firearm sound type; if the first sound is judged not to be a gunshot, perform step 1822 and differentiate the expression of the first sound according to character and other sound types.
Step 1820: Differentiate the expression of the first sound according to the type of firearm sound.
Illustratively, when the terminal determines from the sound parameters that the first sound is a gunshot, it sets the first sound indicator of the first sound to red or to a firearm icon style, and then determines the first sound indicator's wave amplitude, wave vibration frequency, and wave duration from the parameters of the first sound.
Step 1822: Differentiate the expression of the first sound according to virtual-character and other sound types.
Illustratively, when the terminal determines from the sound parameters that the first sound source is a virtual character, it sets the first sound indicator of the first sound to white or to a footprint icon style or head icon style, and then determines the first sound indicator's wave amplitude, wave vibration frequency, and wave duration from the parameters of the first sound.
In summary, in the method provided by this embodiment, the terminal identifies the type of the first sound from its sound parameters, calculates the arrival loudness of the first sound from the distance between the first sound source and the first virtual character and the original loudness of the first sound source, and determines the horizontal azimuth and vertical azimuth of the first sound by judging whether it lies within the azimuth scale range displayed in the compass information, thereby improving the prompting of loudness, sound frequency, and sound type.
Fig. 19 shows a structural schematic diagram of a sound prompting apparatus in a virtual world provided by an exemplary embodiment of this application. The apparatus may be implemented as all or part of a computer device in software, hardware, or a combination of both, and the apparatus 1900 includes:
a display module 1901 configured to display a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information, the compass information including an azimuth scale sequence, azimuth scales in the azimuth scale sequence being used to indicate horizontal azimuths faced by the first virtual character in the virtual world;
a control module 1902 configured to control the first virtual character to move about in the virtual world;
the display module 1901 being configured to, while the first virtual character is active in the virtual world, display, if a first sound source exists in the surroundings of the first virtual character, a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate a horizontal azimuth and a vertical azimuth corresponding to the first sound source.
In an optional design of this embodiment, the display module 1901 is configured to display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having a visual representation, the center of the first sound indicator being aligned with the first azimuth scale, the first azimuth scale indicating the horizontal azimuth of the first sound source, and the visual representation of the first sound indicator indicating the vertical azimuth of the first sound source.
In an optional design of this embodiment, the visual representation of the first sound indicator includes a first visual representation used to indicate the vertical azimuth of the first sound source, the first visual representation including at least one of: the shape of the first sound indicator; a vertical azimuth scale in the first sound indicator; an arrow in the first sound indicator; a text prompt in the first sound indicator.
In an optional design of this embodiment, the first visual representation includes n kinds of first visual representations in one-to-one correspondence with n vertical azimuths, n being a positive integer greater than 1;
the display module 1901 is configured to display, based on the first azimuth scale in the compass information, a first sound indicator having the i-th first visual representation, the i-th first visual representation indicating that the first sound source corresponds to the i-th vertical azimuth, i being a positive integer not greater than n.
In an optional design of this embodiment, the vertical azimuths include: an upper azimuth, a middle azimuth, and a lower azimuth;
the display module 1901 is configured to display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with an upward shape indicating that the vertical azimuth of the first sound source is the upper azimuth; or a first sound indicator with a vertically symmetric shape indicating that the vertical azimuth of the first sound source is the middle azimuth; or a first sound indicator with a downward shape indicating that the vertical azimuth of the first sound source is the lower azimuth.
In an optional design of this embodiment, the display module 1901 is configured to display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with the vertical azimuth scale, the vertical azimuth scale indicating the pitch angle of the first sound source relative to the first virtual character; or a first sound indicator with the arrow, the arrow's direction indicating the vertical azimuth of the first sound source; or a first sound indicator with the text prompt, the text prompt prompting the vertical azimuth of the first sound source.
In an optional design of this embodiment, the first sound source is configured to make a first sound, and the visual representation of the first sound indicator further includes other visual representations, the first visual representation and the other visual representations being of different types, the other visual representations including at least one of the following:
a second visual representation used to indicate the sound type of the first sound;
a third visual representation used to indicate the loudness of the first sound;
a fourth visual representation used to indicate the sound distance of the first sound;
a fifth visual representation used to indicate the action frequency of the first sound.
In an optional design of this embodiment, the apparatus further includes:
a determining module 1903 configured to determine the visual representation of the first sound indicator according to the sound parameters of the first sound.
In an optional design of this embodiment, the sound parameters include a sound type, and the determining module 1903 is configured to:
determine the second visual representation of the first sound indicator according to the sound type of the first sound.
In an optional design of this embodiment, the second visual representation includes the color of the first sound indicator or the icon style of the first sound indicator.
In an optional design of this embodiment, the first sound indicator is expressed as a sound-wave amplitude spectrum, the third visual representation includes the amplitude of the first sound-wave amplitude spectrum, and the sound parameters include a loudness; the determining module 1903 is configured to determine the amplitude of the first sound-wave amplitude spectrum according to the arrival loudness of the first sound at the first virtual character.
In an optional design of this embodiment, the determining module 1903 is configured to determine the arrival loudness of the first sound according to the original loudness of the first sound and influence parameters, the influence parameters including at least one of the following:
the distance between the first sound source and the first virtual character;
the equipment worn by the first virtual character;
when the first sound source is a second virtual character, the equipment worn by the second virtual character;
the material of the first sound source or the material the first sound source touches.
In an optional design of this embodiment, for the same original loudness, the arrival loudness determined when the first virtual character wears a headset is greater than the arrival loudness determined when the first virtual character does not wear a headset; or, for the same original loudness, the arrival loudness determined when the first virtual character wears a helmet is smaller than the arrival loudness determined when the first virtual character does not wear a helmet; or, for the same original loudness, the arrival loudness determined when the second virtual character has a suppressor fitted is smaller than the arrival loudness determined when the second virtual character does not have a suppressor fitted.
In an optional design of this embodiment, the fourth visual representation includes the display start time of the first sound indicator, and the sound parameters include a sound propagation speed; the determining module 1903 is configured to determine the display start time of the first sound indicator based on the sound propagation speed between the first sound source and the first virtual character, the display start time being later than the generation time of the first sound.
In an optional design of this embodiment, the first sound indicator is expressed as a sound-wave amplitude spectrum, the fifth visual representation includes the jitter frequency of the sound-wave amplitude spectrum, and the sound parameters include the action frequency of the first sound source; the determining module 1903 is configured to determine the jitter frequency of the sound-wave amplitude spectrum according to the action frequency with which the first sound source makes the first sound.
In an optional design of this embodiment, the visual representation of the first sound indicator includes at least one of: shape, pattern, color, texture, text, animation effect, display start time, display duration, and fade-out time.
In an optional design of this embodiment, the azimuth scale sequence in the compass information corresponds to the visible azimuth range of the first virtual character; the display module 1901 is configured to, if a second sound source exists in the surroundings of the first virtual character and the horizontal azimuth of the second sound source lies outside the visible azimuth range, display a second sound indicator based on the edge azimuth scale in the azimuth scale sequence closest to the horizontal azimuth of the second sound source, the second sound indicator representing that the second sound source exists along the horizontal azimuth indicated by the edge azimuth scale.
In an optional design of this embodiment, the display module 1901 is configured to cancel the display of the first sound indicator when the first virtual character enters a deafened state.
In an optional design of this embodiment, the determining module 1903 is configured to determine, when the first sound source makes at least two sounds whose generation times differ by less than a threshold, the loudest of the at least two sounds as the first sound.
This application further provides a computer device including a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the sound prompting method in a virtual world provided by the method embodiments above. Note that the computer device may be the terminal provided in Fig. 20 below.
Fig. 20 shows a structural block diagram of a computer device 2000 provided by an exemplary embodiment of this application. The computer device 2000 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer, and may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the computer device 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, for example a 4-core or 8-core processor, and may be implemented in at least one of the hardware forms DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called a CPU (Central Processing Unit); the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 2001 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the screen. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for handling machine-learning computations.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory, and may further include high-speed random access memory and non-volatile memory, for example one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 2002 stores at least one instruction to be executed by the processor 2001 to implement the sound prompting method in a virtual world provided by the method embodiments of this application.
In some embodiments, the computer device 2000 optionally further includes a peripheral interface 2003 and at least one peripheral. The processor 2001, the memory 2002, and the peripheral interface 2003 may be connected by a bus or signal line, and each peripheral may be connected to the peripheral interface 2003 by a bus, signal line, or circuit board. Specifically, the peripherals include at least one of: a radio-frequency circuit 2004, a display screen 2005, a camera 2006, an audio circuit 2007, a positioning component 2008, and a power supply 2009.
In some embodiments, the computer device 2000 further includes one or more sensors 2010, including but not limited to: an acceleration sensor 2011, a gyroscope sensor 2012, a pressure sensor 2013, a fingerprint sensor 2014, an optical sensor 2015, and a proximity sensor 2016.
A person skilled in the art can understand that the structure shown in Fig. 20 does not limit the computer device 2000, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
According to another aspect of this application, a computer storage medium is further provided, the computer-readable storage medium storing at least one piece of program code loaded and executed by a processor to implement the sound prompting method in a virtual world as described above.
According to another aspect of this application, a computer program product or computer program is further provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the sound prompting method in a virtual world as described above.
It should be understood that "multiple" herein means two or more. "And/or" describes an association relationship of associated objects and indicates three possible relationships: A and/or B may mean A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely optional embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (22)

  1. A sound prompting method in a virtual world, the method being performed by a terminal and comprising:
    displaying a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information, the compass information comprising an azimuth scale sequence, azimuth scales in the azimuth scale sequence being used to indicate horizontal azimuths faced by the first virtual character in the virtual world;
    controlling the first virtual character to move about in the virtual world; and
    while the first virtual character is active in the virtual world, if a first sound source exists in the surroundings of the first virtual character, displaying a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate a horizontal azimuth and a vertical azimuth corresponding to the first sound source.
  2. The method according to claim 1, wherein the displaying a first sound indicator based on a first azimuth scale in the azimuth scale sequence comprises:
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having a visual representation, a center position of the first sound indicator being aligned with the first azimuth scale, the first azimuth scale being used to indicate the horizontal azimuth of the first sound source, and the visual representation of the first sound indicator being used to indicate the vertical azimuth of the first sound source.
  3. The method according to claim 2, wherein the visual representation of the first sound indicator comprises a first visual representation used to indicate the vertical azimuth of the first sound source, the first visual representation comprising at least one of the following:
    the shape of the first sound indicator;
    a vertical azimuth scale in the first sound indicator;
    an arrow in the first sound indicator;
    a text prompt in the first sound indicator.
  4. The method according to claim 3, wherein the vertical azimuth comprises: an upper azimuth, a middle azimuth, and a lower azimuth;
    the displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having a visual representation comprises:
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with an upward shape, the first sound indicator being used to indicate that the vertical azimuth of the first sound source is the upper azimuth;
    or,
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with a vertically symmetric shape, the first sound indicator being used to indicate that the vertical azimuth of the first sound source is the middle azimuth;
    or,
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator with a downward shape, the first sound indicator being used to indicate that the vertical azimuth of the first sound source is the lower azimuth.
  5. The method according to claim 3, wherein the displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having a visual representation comprises:
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the vertical azimuth scale, the vertical azimuth scale being used to indicate the pitch angle of the first sound source relative to the first virtual character;
    or,
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the arrow, the direction of the arrow being used to indicate the vertical azimuth of the first sound source;
    or,
    displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the text prompt, the text prompt being used to prompt the vertical azimuth of the first sound source.
  6. The method according to claim 3, wherein the first sound source is used to make a first sound, and the visual representation of the first sound indicator further comprises other visual representations,
    wherein the first visual representation and the other visual representations are visual representations of different types, the other visual representations comprising at least one of the following:
    a second visual representation used to indicate the sound type of the first sound;
    a third visual representation used to indicate the loudness of the first sound;
    a fourth visual representation used to indicate the sound distance of the first sound;
    a fifth visual representation used to indicate the action frequency of the first sound.
  7. The method according to claim 6, further comprising:
    determining the visual representation of the first sound indicator according to sound parameters of the first sound.
  8. The method according to claim 7, wherein the sound parameters comprise a sound type, and the determining the visual representation of the first sound indicator according to sound parameters of the first sound comprises:
    determining the second visual representation of the first sound indicator according to the sound type of the first sound.
  9. The method according to claim 8, wherein the second visual representation comprises the color of the first sound indicator or the icon style of the first sound indicator.
  10. The method according to claim 7, wherein the first sound indicator is expressed as a sound-wave amplitude spectrum, the third visual representation comprises the amplitude of the first sound-wave amplitude spectrum, and the sound parameters comprise a loudness, and the determining the visual representation of the first sound indicator according to sound parameters of the first sound comprises:
    determining the amplitude of the first sound-wave amplitude spectrum according to the arrival loudness of the first sound at the first virtual character.
  11. The method according to claim 10, further comprising:
    determining the arrival loudness of the first sound according to the original loudness of the first sound and influence parameters, the influence parameters comprising at least one of the following:
    the distance between the first sound source and the first virtual character;
    equipment worn by the first virtual character;
    when the first sound source is a second virtual character, equipment worn by the second virtual character;
    the material of the first sound source or the material touched by the first sound source.
  12. The method according to claim 11, wherein
    for the same original loudness, the arrival loudness determined when the first virtual character wears a headset is greater than the arrival loudness determined when the first virtual character does not wear a headset;
    or,
    for the same original loudness, the arrival loudness determined when the first virtual character wears a helmet is smaller than the arrival loudness determined when the first virtual character does not wear a helmet;
    or,
    for the same original loudness, the arrival loudness determined when the second virtual character has a suppressor fitted is smaller than the arrival loudness determined when the second virtual character does not have a suppressor fitted.
  13. The method according to claim 7, wherein the fourth visual representation comprises the display start time of the first sound indicator, and the sound parameters comprise a sound propagation speed, and the determining the visual representation of the first sound indicator according to sound parameters of the first sound comprises:
    determining the display start time of the first sound indicator based on the sound propagation speed between the first sound source and the first virtual character, the display start time being later than the generation time of the first sound.
  14. The method according to claim 7, wherein the first sound indicator is expressed as a sound-wave amplitude spectrum, the fifth visual representation comprises the jitter frequency of the sound-wave amplitude spectrum, and the sound parameters comprise the action frequency of the first sound source, and the determining the visual representation of the first sound indicator according to sound parameters of the first sound comprises:
    determining the jitter frequency of the sound-wave amplitude spectrum according to the action frequency with which the first sound source makes the first sound.
  15. The method according to any one of claims 1 to 14, wherein the visual representation of the first sound indicator comprises at least one of: shape, pattern, color, texture, text, animation effect, display start time, display duration, and fade-out time.
  16. The method according to any one of claims 1 to 14, wherein the azimuth scale sequence in the compass information corresponds to a visible azimuth range of the first virtual character, and the method further comprises:
    if a second sound source exists in the surroundings of the first virtual character and the horizontal azimuth of the second sound source lies outside the visible azimuth range, displaying a second sound indicator based on the edge azimuth scale in the azimuth scale sequence closest to the horizontal azimuth of the second sound source, the second sound indicator being used to represent that the second sound source exists along the horizontal azimuth indicated by the edge azimuth scale.
  17. The method according to any one of claims 1 to 14, further comprising:
    canceling the display of the first sound indicator when the first virtual character enters a deafened state.
  18. The method according to any one of claims 1 to 14, further comprising:
    when the first sound source makes at least two sounds and the generation time difference of the at least two sounds is less than a threshold, determining the loudest of the at least two sounds as the first sound.
  19. A sound prompting apparatus in a virtual world, the apparatus being deployed on a terminal and comprising:
    a display module configured to display a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information, the compass information comprising an azimuth scale sequence, azimuth scales in the azimuth scale sequence being used to indicate horizontal azimuths faced by the first virtual character in the virtual world;
    a control module configured to control the first virtual character to move about in the virtual world;
    the display module being further configured to, while the first virtual character is active in the virtual world, display, if a first sound source exists in the surroundings of the first virtual character, a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate a horizontal azimuth and a vertical azimuth corresponding to the first sound source.
  20. A computer device, comprising: a processor and a memory, the memory storing at least one piece of program, the at least one piece of program being loaded and executed by the processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
  21. A computer-readable storage medium, storing at least one piece of program, the at least one piece of program being loaded and executed by a processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
  22. A computer program product which, when executed, causes a processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
PCT/CN2022/102593 2021-08-05 2022-06-30 Sound prompting method, apparatus, device, and storage medium in virtual world WO2023011063A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/322,031 US20230285859A1 (en) 2021-08-05 2023-05-23 Virtual world sound-prompting method, apparatus, device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110898406.6 2021-08-05
CN202110898406.6A CN115703011A (zh) 2021-08-05 2021-08-05 Sound prompting method, apparatus, device, and storage medium in virtual world

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/322,031 Continuation US20230285859A1 (en) 2021-08-05 2023-05-23 Virtual world sound-prompting method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2023011063A1 true WO2023011063A1 (zh) 2023-02-09

Family

ID=85155163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102593 WO2023011063A1 (zh) 2021-08-05 2022-06-30 虚拟世界中的声音提示方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20230285859A1 (zh)
CN (1) CN115703011A (zh)
WO (1) WO2023011063A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107890673A (zh) * 2017-09-30 2018-04-10 网易(杭州)网络有限公司 补偿声音信息的视觉显示方法及装置、存储介质、设备
CN108854069A (zh) * 2018-05-29 2018-11-23 腾讯科技(深圳)有限公司 音源确定方法和装置、存储介质及电子装置
US20190076739A1 (en) * 2017-09-12 2019-03-14 Netease (Hangzhou) Network Co.,Ltd. Information processing method, apparatus and computer readable storage medium
TW201931354A (zh) * 2018-01-05 2019-08-01 美律實業股份有限公司 用於音頻成像的可穿戴式電子裝置及其操作方法
CN113559504A (zh) * 2021-04-28 2021-10-29 网易(杭州)网络有限公司 信息处理方法、装置、存储介质及电子设备


Also Published As

Publication number Publication date
CN115703011A (zh) 2023-02-17
US20230285859A1 (en) 2023-09-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22851778

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2301006963

Country of ref document: TH

WWE Wipo information: entry into national phase

Ref document number: 11202307122R

Country of ref document: SG

ENP Entry into the national phase

Ref document number: 2023571572

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE