WO2023011063A1 - Sound prompt method, apparatus, device and storage medium in a virtual world - Google Patents
Sound prompt method, apparatus, device and storage medium in a virtual world
- Publication number
- WO2023011063A1 (PCT/CN2022/102593; CN2022102593W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- indicator
- azimuth
- virtual character
- visual representation
- Prior art date
Classifications
- A—HUMAN NECESSITIES › A63—SPORTS; GAMES; AMUSEMENTS › A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR › A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions › A63F13/50—Controlling the output signals based on the game progress
- A63F13/5378—Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, for displaying an additional top view, e.g. radar screens or maps
- A63F13/54—Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
- A63F13/533—Controlling the output signals based on the game progress involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
- A63F13/5375—Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
Definitions
- the embodiments of the present application relate to the field of human-computer interaction, and in particular to a sound prompt method, apparatus, device, and storage medium in a virtual world.
- the user can operate the game characters in the game program for competitive confrontation.
- the game program provides a virtual world, and game characters are virtual characters located in the virtual world.
- the game screen and mini-map controls are displayed on the terminal.
- the game screen is a screen obtained by observing the virtual world from the perspective of the current game character
- the mini-map control is a control for displaying an overhead map of the virtual world.
- a sound icon is displayed on the mini-map control, so as to prompt sounds such as gunshots, footsteps, and silenced shots in mini-map space according to the sound icon.
- point A on the mini-map control displays a sound icon, and the sound icon is a pair of footprints, which indicates that other game characters are walking at the position in the virtual world corresponding to point A on the mini-map control.
- the above-mentioned sound icon cannot provide the specific position of the sound source in the virtual world, and the effective information it can provide is limited, which is not conducive to game characters carrying out deeper confrontation in the virtual world.
- the present application provides a sound prompting method, apparatus, device, and storage medium in a virtual world, which can simultaneously indicate the horizontal and vertical orientations of a sound source in the virtual world through a sound indicator. The technical solution is as follows:
- a method for sound prompting in a virtual world comprising:
- compass information is displayed on the perspective picture; the compass information includes a sequence of azimuth scales, and the azimuth scales in the sequence are used to indicate the horizontal orientation that the first virtual character faces in the virtual world;
- the first sound indicator is used to indicate the horizontal orientation and vertical orientation corresponding to the first sound source.
- a sound prompting device in a virtual world comprising:
- the display module is used to display the perspective picture of the first virtual character, on which compass information is displayed; the compass information includes an azimuth scale sequence, and the azimuth scales in the sequence are used to indicate the horizontal orientation that the first virtual character faces in the virtual world;
- control module configured to control the activities of the first virtual character in the virtual world
- the display module is further configured to, when the first virtual character is active in the virtual world and a first sound source exists in the surrounding environment of the first virtual character, display a first sound indicator based on the first azimuth scale in the azimuth scale sequence, where the first sound indicator is used to indicate the horizontal azimuth and vertical azimuth corresponding to the first sound source.
- a computer device includes a processor and a memory; at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the sound prompting method in the virtual world described in the foregoing aspect.
- a computer-readable storage medium is provided. At least one program is stored in the computer-readable storage medium, and the at least one program is loaded and executed by a processor to implement the sound prompting method in the virtual world described in the foregoing aspect.
- a computer program product is provided.
- when the computer program product is executed, the processor is enabled to implement the method for sound prompting in a virtual world as described in the foregoing aspect.
- compass information is displayed on the perspective picture where the first virtual character is located, and the compass information includes the azimuth scale sequence, so the first sound indicator can be displayed based on the first azimuth scale in the azimuth scale sequence.
- the first sound indicator can simultaneously indicate the horizontal and vertical azimuths corresponding to the first sound source, so that the user can accurately judge the specific spatial position of the first sound source from the visual representation alone; in hearing-limited scenes without external sound or earphones, enough effective spatial information about the sound source can still be obtained, which is beneficial for the first virtual character to carry out a deeper confrontation in the virtual world.
- Fig. 1 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application
- Fig. 2 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application
- Fig. 3 is a schematic interface diagram of a virtual environment screen provided by an exemplary embodiment of the present application.
- Fig. 4 is a schematic diagram of three sound prompts in the vertical direction provided by an exemplary embodiment of the present application.
- Fig. 5 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application
- Fig. 6 is a schematic diagram of an interface for prompting a sound from above provided by an exemplary embodiment of the present application.
- Fig. 7 is a schematic diagram of an interface prompting two types of sounds provided by an exemplary embodiment of the present application.
- Fig. 8 is a schematic diagram of an interface for prompting sound with an icon style provided by an exemplary embodiment of the present application.
- Fig. 9 is a schematic diagram of prompting sounds at different distances provided by an exemplary embodiment of the present application.
- Fig. 10 is a schematic interface diagram of a second sound indicator provided by an exemplary embodiment of the present application.
- Fig. 11 is a schematic diagram of vertical sound confirmation provided by an exemplary embodiment of the present application.
- Fig. 12 is a configuration diagram of the influence of the helmet earphone on the sound prompt provided by an exemplary embodiment of the present application
- Fig. 13 is a configuration diagram of the influence of the muffler on the sound prompt provided by an exemplary embodiment of the present application
- Fig. 14 is a configuration diagram of the impact of trampling materials on sound prompts provided by an exemplary embodiment of the present application.
- Fig. 15 is a configuration diagram of influence coefficients of different sound types provided by an exemplary embodiment of the present application.
- Fig. 16 is a configuration diagram of different firearm type parameters provided by an exemplary embodiment of the present application.
- FIG. 17 is a configuration diagram of general configuration parameters of prompt sounds provided by an exemplary embodiment of the present application.
- Fig. 18 is a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
- Fig. 19 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
- Fig. 20 is a schematic structural diagram of a server provided by an exemplary embodiment of the present application.
- first-person shooting game (First Person Shooting, FPS for short)
- third-person shooting game (Third Person Shooting, TPS for short)
- a first-person shooting game is a shooting game played from the player's own perspective. Instead of manipulating a virtual character displayed on the screen as in other game types, the player experiences the visual impact of the game from an immersive first-person perspective.
- the difference from third-person shooting games is that in first-person shooting games, only the protagonist's field of vision is displayed on the screen.
- in third-person shooting games, the game character controlled by the player is visible on the game screen, which places more emphasis on the sense of action.
- Virtual world is the virtual world displayed (or provided) when the application program is running on the terminal.
- the virtual world may be a three-dimensional virtual world or a two-dimensional virtual world.
- the three-dimensional virtual world can be a simulated environment of the real world, a semi-simulated and semi-fictitious environment, or a purely fictitious environment.
- the following embodiments are illustrated by taking the virtual world as a three-dimensional virtual world as an example, but this is not limited thereto.
- the virtual world is also used for virtual scene battles between at least two virtual characters.
- the virtual scene is also used for fighting with virtual firearms between at least two virtual characters.
- Virtual character refers to the movable object in the virtual world.
- the movable object may be a simulated character or an anime character in the virtual world.
- the virtual object is a three-dimensional model created based on skeletal animation technology.
- Each virtual object has its own shape and volume in the three-dimensional virtual scene, and occupies a part of the space in the three-dimensional virtual scene.
- the virtual characters may be individuals in the virtual world that can independently make different sounds, including the first virtual character, the second virtual character, etc., each representing an independent individual that makes its own sounds. An individual that is making a sound can act as a sound source.
- Sound indicator It is a visual control used to indicate sound information in the virtual world.
- the visual control has one or more visual representations, and each visual representation is used to represent a kind of sound information.
- the types of sound information include at least one of: horizontal orientation, vertical orientation, sound type, sound volume, sound distance, and the action frequency of the sound source.
- Visual representation of the sound indicator refers to the display effect displayed on the sound indicator that can be visually captured by the user.
- each visual representation includes one or more combinations of shape, pattern, color, texture, text, animation effect, start display time, display duration, and blanking time. Different kinds of sound information are distinguished by different visual representations. Optionally, visual representations of different dimensions are superimposed on the same sound indicator to present different information simultaneously.
- Fig. 1 shows a schematic structural diagram of a computer system provided by an exemplary embodiment of the present application.
- the computer system 100 includes: a first terminal 120 , a server cluster 140 and a second terminal 160 .
- the first terminal 120 is installed and runs a game program supporting a virtual environment.
- the game program may be a first-person shooter game or a third-person shooter game.
- the first terminal 120 may be a terminal used by the first user, and the first user uses the first terminal 120 to operate the first virtual character in the virtual world to carry out activities; such activities include but are not limited to at least one of: sprinting, climbing, squatting, walking quietly, crawling, single-shot firing, continuous firing, Non-Player Character (NPC) shouting, rubbing grass, wounded shouting, near-death shouting, explosion, and walking.
- NPC Non-Player Character
- the virtual character operated by the first user is the first virtual character.
- the first terminal 120 is connected to the server cluster 140 through a wireless network or a wired network.
- the server cluster 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
- the server cluster 140 is used to provide background services for applications supporting virtual environments.
- the server cluster 140 undertakes the primary computing work while the first terminal 120 and the second terminal 160 undertake the secondary computing work; or, the server cluster 140 undertakes the secondary computing work while the first terminal 120 and the second terminal 160 undertake the primary computing work; or, a distributed computing architecture is used among the server cluster 140, the first terminal 120, and the second terminal 160 to perform collaborative computing.
- the second terminal 160 is installed and runs a game program supporting a virtual environment.
- the game program may be a first-person shooter game or a third-person shooter game.
- the second terminal 160 may be a terminal used by the second user, and the second user uses the second terminal 160 to operate the second virtual character in the virtual world to carry out activities; such activities include but are not limited to at least one of: sprinting, climbing, squatting, walking quietly, crawling quietly, squatting quietly, single-shot firing, continuous firing, NPC shouting, rubbing grass, injured shouting, near-death shouting, explosion, and walking.
- the virtual character operated by the second user is the second virtual character.
- the first virtual character and the second virtual character may belong to the same team, the same organization, have friendship or have temporary communication authority.
- the application programs installed on the first terminal 120 and the second terminal 160 are the same, or the same type of application programs on different platforms.
- the first terminal 120 may generally refer to one of the multiple terminals
- the second terminal 160 may generally refer to one of the multiple terminals. This embodiment only uses the first terminal 120 and the second terminal 160 as an example for illustration.
- the first terminal 120 and the second terminal 160 may be desktop devices or mobile devices.
- the types of mobile devices of the first terminal 120 and the second terminal 160 are the same or different, and the mobile devices include, but are not limited to, portable electronic devices such as smart phones and tablet computers.
- Fig. 2 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
- the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
- the method includes the following steps:
- Step 202 Displaying the perspective screen of the first virtual character, compass information is displayed on the perspective screen, and the compass information includes a sequence of azimuth scales;
- the compass information (or compass control) is used to use the foothold of the first virtual character in the virtual world as a reference point to indicate the various horizontal orientations that the first virtual character faces in the virtual world.
- the horizontal orientation is represented by the longitude in the virtual world, such as 20 degrees east longitude, 160 degrees west longitude and so on.
- a first virtual character 10 is displayed on the perspective screen.
- the first virtual character 10 can be any movable object in the virtual world, for example, it can be a soldier in the virtual world.
- the movement wheel 12 is used to control the movement of the first virtual character 10 in the virtual world
- the skill button 14 is used to control the first virtual character 10 to release skills or use items in the virtual world.
- the compass information 16 displays a sequence of azimuth scales.
- the azimuth scale sequence may be a sequence of multiple azimuth scales.
- the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character faces in the virtual world.
- the azimuth scale sequence includes 7 azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, and 255 degrees.
- the orientation scale of 215 degrees is used to indicate the horizontal orientation directly in front of the first virtual character 10 .
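The azimuth scale sequence above can be pictured as the tick labels of a compass strip centered on the facing direction. As a hedged illustration (the 15-degree tick step, the ±45-degree visible window, and all function names are assumptions for this sketch, not values fixed by this application), such a sequence might be generated as follows:

```python
# Hypothetical sketch of generating a compass azimuth scale sequence.
# Cardinal-direction labels replace the numeric degrees where they apply.
CARDINALS = {0: "north", 45: "northeast", 90: "east", 135: "southeast",
             180: "south", 225: "southwest", 270: "west", 315: "northwest"}

def azimuth_scale_sequence(heading_deg, half_window=45, step=15):
    """Return labels of the azimuth scales visible around the facing heading."""
    labels = []
    start = heading_deg - half_window
    tick = -(-start // step) * step   # first tick: ceil to a multiple of step
    while tick <= heading_deg + half_window:
        deg = tick % 360
        labels.append(CARDINALS.get(deg, f"{deg} degrees"))
        tick += step
    return labels
```

For a character facing 215 degrees, this sketch yields ticks from "south" through "255 degrees", including "southwest", comparable to the sequence described in the embodiment.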
- Step 204 controlling the activities of the first virtual character in the virtual world
- the user can control the first virtual character to move in the virtual world.
- the activities here may include various forms of activities, such as moving, releasing skills, using items, and so on. Different activities can have different control methods.
- the user can control the first virtual character 10 to move through the movement wheel 12, and can control the first virtual character to release skills or use items by pressing one or more preset skill buttons 14.
- the user can also control the first avatar through signals generated by long pressing, clicking, double-clicking and/or sliding on the touch screen.
- Step 206 When the first virtual character is active in the virtual world, if there is a first sound source in the surrounding environment of the first virtual character, display the first sound indicator based on the first azimuth scale in the azimuth scale sequence. The first sound indicator is used to indicate the horizontal orientation and vertical orientation of the first sound source.
- the surrounding environment of the first virtual character is a virtual environment within a three-dimensional spherical range with the first virtual character as the center and a preset distance as the radius.
- the surrounding environment of the first virtual character is a virtual environment centered on the first virtual character, with a preset distance as a radius and located within a three-dimensional hemispherical range on the ground plane.
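The two range definitions above amount to a simple distance test, optionally restricted to the half-space above the ground plane. A minimal sketch, with function names and coordinate conventions (z as height) assumed for illustration:

```python
import math

def in_spherical_range(listener, source, radius):
    """Full 3D sphere centered on the listener (first definition)."""
    return math.dist(listener, source) <= radius

def in_hemispherical_range(listener, source, radius):
    """Hemisphere on the ground plane (second definition): same distance
    test, but the source must not be below the listener's ground level."""
    return source[2] >= listener[2] and math.dist(listener, source) <= radius
```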
- the first sound source may be a virtual element capable of emitting sound in the virtual world, such as a second virtual character (friendly, enemy, or NPC), a virtual vehicle, a virtual flying object, various offensive weapons, virtual animals, etc.
- the second virtual character may be other virtual characters in the virtual world except the first virtual character, and the number of the second virtual character is at least one. Since the virtual world is a digitally simulated environment, the sound in this application may refer to a sound event in the digital world, and the sound event is represented by a set of parameters.
- a set of parameters of a sound event includes, but is not limited to, at least one of: the three-dimensional coordinates of the sound source in the virtual world, the type of the sound source, the type of material the sound source touches, the type of sound, the volume of the original sound, and the equipment worn by the sound source.
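The parameter set of a sound event described above could be modeled as a plain record. A minimal sketch, in which the field names and types are assumptions for illustration (the application only lists the kinds of parameters a sound event may carry):

```python
from dataclasses import dataclass, field

@dataclass
class SoundEvent:
    position: tuple          # 3D coordinates of the source in the virtual world
    source_type: str         # e.g. "second_virtual_character", "virtual_vehicle"
    surface_material: str    # material the source touches, e.g. "metal", "grass"
    sound_type: str          # e.g. "gunshot", "footstep", "explosion"
    base_volume: float       # volume of the original sound
    equipment: list = field(default_factory=list)  # gear worn by the source
```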
- when a first sound source (a virtual element, also referred to as an individual in the virtual world) exists in the surrounding environment of the first virtual character, the first sound indicator can be displayed based on the first azimuth scale in the sequence of azimuth scales.
- the first sound indicator is used to indicate the horizontal orientation and vertical orientation of the first sound source.
- the horizontal orientation is the orientation divided along the horizontal direction with the first virtual object as the center, such as the longitude in the virtual world.
- the vertical orientation is the orientation divided along the vertical direction with the first virtual object as the center, such as the pitch angle of the sound source (for example, the first sound source) relative to the first virtual character.
- the vertical orientation is represented by a vertical orientation scale similar to latitude; or, the vertical orientation is represented by an altitude; or, since the virtual character has limited room for activity in the vertical direction, the vertical orientation can be simplified or abstracted as: upper orientation, middle orientation, and lower orientation.
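Reducing the vertical orientation to the three simplified bands named above can be done from the pitch angle of the sound source relative to the first virtual character. A hedged sketch, where the ±10-degree threshold separating the middle band is an assumption of this illustration:

```python
import math

def vertical_band(listener, source, threshold_deg=10.0):
    """Classify the source as upper/middle/lower by pitch angle (z is height)."""
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dz = source[2] - listener[2]
    horizontal = math.hypot(dx, dy)
    pitch = math.degrees(math.atan2(dz, horizontal))
    if pitch > threshold_deg:
        return "upper"
    if pitch < -threshold_deg:
        return "lower"
    return "middle"
```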
- the horizontal orientation of the first sound source may be the first horizontal orientation
- the vertical orientation of the first sound source may be the first vertical orientation
- the first horizontal orientation of the first sound source may be indicated by the first azimuth scale
- the first vertical orientation of the first sound source may be represented by a first visual representation of the first sound indicator, which may be at least one of the shape, pattern, color, texture, text, and animation effects of the first sound indicator.
- the terminal may display a first sound indicator with a first visual representation based on the first azimuth scale in the azimuth scale sequence; the center position of the first sound indicator is aligned with the first azimuth scale, the first azimuth scale is used to indicate the horizontal orientation of the first sound source, and the first visual representation is used to indicate the vertical orientation of the first sound source.
- in the method provided by this embodiment, when the first sound source exists in the surrounding environment of the first virtual character, the first sound indicator is displayed based only on the first azimuth scale in the azimuth scale sequence, without relying on a mini-map to prompt the position.
- the first sound indicator can simultaneously indicate the first horizontal orientation and the first vertical orientation corresponding to the first sound source, so that the user can accurately judge the spatial position of the first sound source from the visual representation alone.
- even in hearing-limited scenes, enough effective spatial information about the sound source can be obtained. Exemplarily, as shown in FIG. 3, the first sound indicator 19 is displayed based on the first azimuth scale "southwest" in the compass information 16.
- the first azimuth scale "southwest” is used to indicate that the horizontal azimuth of the first sound source is southwest.
- the shape of the first sound indicator 19 is used to indicate that the vertical orientation of the first sound source is the middle orientation.
- taking the first visual representation as a shape as an example, in conjunction with the implementation in (a) of FIG. 4: when the shape of the first sound indicator 19 is an upper triangle, it represents that the first sound source is located in the upper orientation of the first virtual character; when the shape of the first sound indicator 19 is shuttle-shaped, it represents that the first sound source is located in the middle orientation of the first virtual character; when the shape of the first sound indicator 19 is a lower triangle, it represents that the first sound source is located in the lower orientation of the first virtual character.
- when the arrow on the left side of the first sound indicator 19 points upward, it means that the first sound source is located at the upper position of the first virtual character; when the arrow on the left side of the first sound indicator 19 is a circle, it means that the first sound source is located at the middle position of the first virtual character; when the arrow on the left side of the first sound indicator 19 points downward, it means that the first sound source is located at the lower position of the first virtual character.
- the first sound indicator 19 includes three grids arranged vertically. If the uppermost grid of the three grids is filled with color, it means that the first sound source is located at the upper position of the first avatar; if the middle grid of the three grids is filled with color, it means that the first sound source is located at The middle position of an avatar; if the bottommost grid of the three grids is filled with color, it means that the first sound source is located at the lower position of the first avatar.
- taking the first visual representation as the shape of the first sound indicator 19 plus an additional number as an example, in conjunction with the implementation of (d) in FIG. 4: when the shape of the first sound indicator 19 is an upper triangle carrying the number "100m", it means that the first sound source is located in the upper orientation of the first virtual character at a height of 100 meters above the ground; when the shape of the first sound indicator 19 is shuttle-shaped, it means that the first sound source is located in the middle orientation of the first virtual character; when the shape of the first sound indicator 19 is a lower triangle carrying the number "-15m", it means that the first sound source is located in the lower orientation of the first virtual character at a depth of 15 meters below the ground.
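The shape-plus-number representation of Fig. 4(d) can be read as a mapping from the source's height offset to an indicator style. An illustrative sketch, where the shape names and the one-meter dead band around ground level are assumptions of this illustration:

```python
def indicator_style(height_offset_m, band_m=1.0):
    """Map a height offset (meters, relative to ground) to (shape, label)."""
    if height_offset_m > band_m:
        return ("upper_triangle", f"{height_offset_m:g}m")
    if height_offset_m < -band_m:
        return ("lower_triangle", f"{height_offset_m:g}m")
    return ("shuttle", None)   # middle orientation carries no number
```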
- the method provided in this embodiment can prompt various sound information of the first sound source based on the first sound indicator near the compass information when there is no mini-map control on the user interface.
- Multiple aspects of the sound can be conveyed through the various visual representations on the first sound indicator while occupying a very small screen area; in the absence of the first sound source, the number of Head-Up Display (HUD) controls on the entire user interface is kept as small as possible, so that the user interface is more concise and effective and brings the user a more immersive program experience.
- Fig. 5 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
- the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
- the method includes the following steps:
- Step 202: Display the perspective screen of the first virtual character, where compass information is displayed on the perspective screen and the compass information includes a sequence of azimuth scales.
- the first virtual character is a virtual object that the first user is controlling.
- the perspective picture of the first virtual character is a picture obtained by observing the virtual world from the perspective of the first virtual character during the running of the application program in the terminal.
- the perspective picture of the first virtual character is a picture obtained by observing in the virtual world through the first-person perspective of the first virtual character.
- The first-person perspective of the first virtual character automatically follows the movement of the virtual character in the virtual world; that is, when the position of the first virtual character in the virtual world changes, its first-person perspective changes simultaneously, and the first-person perspective always remains within a preset distance range of the first virtual character in the virtual world.
- the compass information includes a sequence of azimuth scales, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character is facing in the virtual world.
- The azimuth scales of the various horizontal orientations that can be observed from the perspective of the first virtual character in the virtual world are displayed in the azimuth scale sequence, and the azimuth scales of horizontal orientations that cannot be observed from the current viewing angle may not be displayed in the azimuth scale sequence.
- a direction scale within a preset range centered on the horizontal position right in front of the first virtual character is displayed in the sequence of direction scales.
- A first virtual character 10, a movement wheel 12, a skill button 14, and compass information 16 are displayed on the perspective screen.
- the first virtual character 10 may be a soldier located in the virtual world.
- The movement wheel 12 is used to control the movement of the first virtual character 10 in the virtual world.
- the skill button 14 is used to control the first virtual character 10 to release skills or use items in the virtual world.
- Compass information 16 is displayed with a sequence of azimuth scales.
- the azimuth scale sequence includes 7 azimuth scales: 165 degrees, south, 195 degrees, 215 degrees, southwest, 240 degrees, 255 degrees.
- the orientation scale of 215 degrees is used to indicate the horizontal orientation directly in front of the first virtual character 10 .
- Step 204: Control the activity of the first virtual character in the virtual world.
- The user can control the first virtual character 10 to move through the movement wheel 12, and the user can also control the first virtual character to release skills or use items by pressing one or more preset skill buttons 14.
- the user can also control the first avatar through signals generated by long pressing, clicking, double-clicking and/or sliding on the touch screen.
- Step 206: When the first virtual character is active in the virtual world, if there is a first sound source in the surrounding environment of the first virtual character (the first sound source being used to emit a first sound), determine the visual representation of the first sound indicator based on the sound parameters of the first sound.
- the visual representation of the first audio indicator includes at least one of the following visual representations:
- a first visual representation for indicating the vertical orientation of the first sound
- a fifth visual representation for indicating the frequency of motion of the first sound.
- each visual representation is a different type of visual representation.
- Each visual expression is one of the shape, pattern, color, texture, text, animation effect, start display time, continuous display time, and blanking time of the first sound indicator. Different visual representations can be superimposed on the same sound indicator, and different visual representations are used to convey different sound information.
- the display start time is the moment when the first sound indicator appears on the user interface.
- the continuous display time is the total duration of displaying the first sound indicator on the user interface.
- The blanking time is the time period during which the first sound indicator remains displayed on the user interface from the moment its opacity begins to decrease until it disappears.
- When the first sound source exists in the surrounding environment of the first virtual character, the first sound source triggers a sound event. If the first virtual character is the virtual character used by the user of the current terminal, and the first sound source is a sound source corresponding to another terminal, the other terminal synchronizes the sound event to the current terminal through the server; if the first sound source is the sound source corresponding to the current terminal, the current terminal generates the sound event itself.
- the sound event has a sound parameter
- the terminal may determine the visual representation of the first sound indicator according to the sound parameter of the first sound.
- Sound parameters include, but are not limited to, at least one of: the type of the first sound source, the material of the first sound source, the equipment of the first sound source, the position of the first sound source, the sound type of the first sound, the volume of the first sound, and the operating frequency of the first sound source.
- In the case where a single active object (i.e., a single sound source) emits at least two sounds, the sound with the loudest volume among the at least two sounds is determined as the first sound.
- the firing event is determined as the first sound event, the sound indicator of the firing event is displayed, and the sound indicator of the walking event is blocked.
- the first sound source emits a new sound event and the volume of the new sound event is greater than the volume of the current sound event
- only the sound indicator of the new sound event is displayed.
- For example, the first virtual character walks a few steps in place and then fires immediately. Since the volume of the firing event is greater than the volume of the walking event, the firing event is determined as the first sound event, the sound indicator of the walking event disappears, and only the sound indicator of the firing event is displayed.
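The loudest-event selection described above can be sketched as follows; the tuple representation of a sound event and the function name are hypothetical.

```python
# Minimal sketch of picking which sound a single source prompts:
# when one source emits several sounds at once, only the loudest
# event's indicator is shown.
def select_prompted_sound(events):
    # events: list of (sound_type, volume) tuples for a single source
    return max(events, key=lambda e: e[1])

# A character walks in place, then fires: the firing event wins.
events = [("walking", 20), ("firing", 100)]
print(select_prompted_sound(events))  # ('firing', 100)
```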
- The method provided in this embodiment further improves the accuracy of sound prompts by eliminating unnecessary calculations for low-volume sound events.
- Step 208: Display a first sound indicator having the visual representation based on the first azimuth scale in the azimuth scale sequence.
- The terminal may display a first sound indicator having at least one visual representation based on the first azimuth scale in the compass information.
- The first sound indicator may be displayed at a suitable position on the user interface. For example, if the compass information is displayed at the top of the user interface, the first sound indicator is displayed below the first azimuth scale in the compass information; if the compass information is displayed at the bottom of the user interface, the first sound indicator is displayed above the first azimuth scale in the compass information.
- The center position of the first sound indicator is aligned with the first azimuth scale; that is, the central axis of the first sound indicator is aligned with the first azimuth scale.
- the visual representation of the first sound indicator may include a first visual representation, and the first visual representation is used to indicate the vertical orientation of the first sound.
- The visual representation of the first sound indicator may include other visual representations in addition to the first visual representation; that is, the terminal may display, based on the first azimuth scale, the first sound indicator having the first visual representation together with other visual representations.
- Other visual manifestations include at least one of the following visual manifestations:
- a fifth visual representation for indicating the frequency of motion of the first sound.
- the first visual representation includes n first visual representations corresponding to n vertical orientations one by one, where n is a positive integer greater than 1.
- the client terminal displays the first sound indicator of the i-th first visual representation based on the first azimuth scale in the compass information, where i is a positive integer not greater than n.
- the i-th first visual representation is used to indicate that the first sound source corresponds to the i-th vertical orientation.
- The i-th first visual representation may include marking, on the first sound indicator, the altitude of the first sound source corresponding to the i-th vertical orientation.
- the n kinds of vertical orientations may include an upper orientation, a middle orientation and a lower orientation, that is, to distinguish the upper, middle, and lower spaces through the first sound indicator.
- the function of the first visual representation is to enable the user to know the vertical orientation of the first sound source by observing the visual representation of the first sound indicator, and there can be one or more ways to visually reflect the vertical orientation of the first sound source .
- The first visual representation includes at least one of the following: the shape of the first sound indicator, the vertical azimuth scale in the first sound indicator, the arrow in the first sound indicator, and the text hint in the first sound indicator.
- this step includes one of the following three steps:
- In the compass information 16, a first sound indicator with an upward-pointing shape is displayed below the first azimuth scale "215" to indicate that the vertical orientation of the first sound is the upper orientation.
- the first sound indicator is displayed in a vertically symmetrical shape, and the first sound indicator is used to indicate that the vertical azimuth of the first sound source is the middle azimuth.
- the vertical orientation of the first sound source is the lower orientation.
- the first visual representation includes a vertical bearing scale
- Displaying a first sound indicator with a vertical azimuth scale based on the first azimuth scale in the azimuth scale sequence, the vertical azimuth scale being used to indicate the vertical azimuth of the first sound source.
- the vertical azimuth scale is used to indicate the elevation angle of the first sound source relative to the first avatar. That is, the vertical orientation scale is represented by the pitch angle of the first sound source relative to the first avatar.
- a first sound indicator with an arrow is displayed based on the first azimuth scale in the azimuth scale sequence, and the arrow direction of the arrow is used to indicate the vertical azimuth of the first sound source.
- an upward arrow represents an upper position whose height is higher than the plane where the first virtual character is located
- a downward arrow represents a lower position whose height is lower than the plane where the first virtual character is located.
- a first sound indicator with a text prompt is displayed based on the first orientation scale in the orientation scale sequence, and the text prompt is used to indicate the vertical orientation of the first sound source.
- a combination of at least two of the above shapes, vertical orientation scales, arrows, and text prompts can also be implemented as the first visual representation, which is not limited.
- the second visual representation is used to indicate the sound type of the first sound.
- the manner of determining the visual expression of the first sound indicator according to the sound parameter of the first sound may be to determine the second visual expression of the first sound indicator according to the sound type of the first sound.
- the second visual representation includes the color of the first sound indicator, that is, different colors of the first sound indicator are used to indicate the sound type of the first sound. For example, use white to represent avatar footsteps/NPC shouts, and red to represent gunshots/explosions.
- the sound indicator 19a is white, representing the sound of footsteps; the sound indicator 19b is red, representing the sound of gunfire.
- the second visual representation includes an icon style of the first sound indicator, that is, the first sound indicator adopts a different icon style to indicate the sound type of the first sound.
- the gun icon style 191 is used to represent the sound type "gunshot”; the footprint icon style 192 is used to represent the sound type "footsteps”; the human head icon style 193 is used to represent the sound type "human voice”;
- the sound type "explosion” is represented by the explosion icon style 194.
- The second visual representation may also be the continuous display duration of the first sound indicator.
- the continuous display duration includes: a first duration when the first sound indicator is displayed in an opaque manner, and a second duration (that is, a blanking duration) when the display is canceled after changing from an opaque manner to a transparent manner. That is, different continuous display durations are used to indicate the sound type of the first sound. For example, different sound types correspond to different first durations, or different sound types correspond to different second durations, or different sound types correspond to different first durations and second durations.
- the third visual representation is used to indicate the loudness of the first sound.
- The sound size of the first sound refers to the sound size of the first sound arriving at the first virtual character, which is used to simulate the sound size actually heard by the first virtual character, rather than the original sound size of the first sound at the first sound source.
- In the case where the first sound indicator is represented by a sound wave amplitude spectrum, the third visual representation includes the amplitude magnitude of the sound wave amplitude spectrum.
- In the case where the sound parameters include the sound size, the manner of determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be to determine the amplitude of the sound wave amplitude spectrum according to the magnitude of the first sound arriving at the first virtual character.
- the sound indicator is represented by a sound wave amplitude spectrum, and the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude.
- two sound indicators 19a and 19b are displayed on the user interface.
- the amplitude of the sound wave of the sound indicator 19a is smaller than the amplitude of the sound wave of the sound indicator 19b
- The magnitude of the sound corresponding to the sound indicator 19a is smaller than the magnitude of the sound corresponding to the sound indicator 19b; that is, for the sound of footsteps and the sound of gunshots emitted at the same distance, the footstep sound arriving at the first virtual character is smaller than the gunshot sound.
- the acoustic indicator 19a on the left has a smaller amplitude of the sound wave
- the acoustic indicator 19b on the right has a greater amplitude of the sound wave.
- the sound indicator is represented by a sound wave amplitude spectrum, and the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude. For two sounds with different sound magnitudes, different sound wave amplitudes are used to represent them.
- Fig. 9 shows that in the distance range of 100-200m, when the gunshot becomes weaker with the distance, the amplitude of the first sound indicator will also become weaker.
- the fourth visual representation is used to indicate the sound distance of the first sound.
- the fourth visual representation is represented by the start display time of the first sound indicator. That is, when the sound event of the first sound is received, the first sound indicator will not be displayed immediately, but will be displayed after a certain time delay. The length of the delay is related to the sound distance, which is the distance between the first sound source and the first avatar.
- the fifth visual representation is used to indicate the frequency of action of the first sound.
- the first sound indicator is represented by a sound wave amplitude spectrum
- the fifth visual representation includes a vibration frequency of the sound wave amplitude spectrum
- the height of the sound wave amplitude spectrum is used to represent the sound wave amplitude
- The sound wave amplitude spectrum can be dynamically scaled and changed to indicate the jitter of the sound wave. Since the ranges of motion that generate the first sound differ, the action frequency of the first sound also differs.
- In the case where the sound parameters include the operating frequency of the first sound source, the manner of determining the visual representation of the first sound indicator according to the sound parameters of the first sound may be to determine the jitter frequency of the sound wave amplitude spectrum according to the action frequency when the first sound source emits the first sound.
- For example, when the first virtual character is running, the sound wave amplitude spectrum is displayed in white and its jitter frequency is higher, to convey the rush of running; when the first virtual character moves slowly, the jitter frequency of the sound wave amplitude spectrum is smaller, to convey slow movement and distinguish it from running.
- Step 210: If there is a second sound source in the surrounding environment of the first virtual character and the horizontal orientation of the second sound source is outside the visible orientation range, display a second sound indicator based on the edge azimuth scale closest to the horizontal orientation of the second sound source in the azimuth scale sequence; the second sound indicator is used to indicate that there is a second sound source along the horizontal direction indicated by the edge azimuth scale.
- In the case where the horizontal orientation (also referred to as the second horizontal orientation) of the second sound source is within the visible orientation range, a second sound indicator is displayed based on the second azimuth scale in the azimuth scale sequence; for example, the horizontal and vertical orientations of the second sound are prompted by a second sound wave amplitude spectrum.
- Otherwise, the second sound indicator is displayed based on the edge azimuth scale closest to the horizontal orientation of the second sound source in the azimuth scale sequence, to indicate the presence of the second sound source or the second sound.
- a second sound indicator 19 is displayed based on the edge azimuth scale on the compass information 18 .
- the second sound indicator 19 may be aligned with the edge azimuth scale, or may exceed the edge azimuth scale.
- the second sound indicator 19 is used to indicate that there is a second sound source in the invisible area on the right side of the first virtual character 10 , and the second sound source emits a second sound.
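The edge-snapping behavior can be sketched as follows. Angles here are relative degrees with 0 at the character's forward direction; the 45-degree half range and the function name are assumptions for illustration.

```python
# Sketch of choosing the azimuth scale at which a sound indicator is
# anchored: a source outside the visible orientation range snaps to
# the nearest edge azimuth scale of the compass.
def indicator_azimuth(source_azimuth: float, half_fov: float = 45.0) -> float:
    if source_azimuth > half_fov:
        return half_fov        # right edge of the compass
    if source_azimuth < -half_fov:
        return -half_fov       # left edge of the compass
    return source_azimuth      # visible: align with its own scale
```

A source 60 degrees to the right is thus anchored at the right edge (45 degrees), matching the second sound indicator 19 shown beyond the edge azimuth scale.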
- the second sound indicator 19 has at least one of the above five visual representations.
- The second sound indicator 19 has an equal or smaller variety of visual representations than the first sound indicator.
- the second sound indicator 19 only uses color or icon style to display the sound type of the second sound.
- Step 212: Cancel or skip displaying the (first) sound indicator when the first virtual character enters a deafened state.
- the terminal will cancel the display of the first sound prompter below the azimuth scale sequence on the compass information, that is, cancel the display of all sound prompters corresponding to the first virtual character.
- For example, when a thrown object such as a grenade or a bomb explodes within close range of the first virtual character, the first virtual character enters the deafened state.
- To sum up, when the first sound source exists in the surrounding environment of the first virtual character, the first sound indicator is displayed based only on the first azimuth scale in the compass information, without a minimap prompting the position; the first sound indicator can simultaneously indicate the first horizontal orientation and the first vertical orientation corresponding to the first sound source, so that the user can accurately judge the spatial position of the first sound source from the visual representation alone.
- Even without a minimap, the user can still obtain sufficient effective spatial information about the sound source.
- The method provided in this embodiment also indicates the vertical orientation, sound type, sound size, sound distance, and action frequency of the sound through different visual representations that exist simultaneously on the first sound indicator, so that the user can obtain this information from the visual representations alone; the effective information of the first sound can thus also be obtained in hearing-limited scenarios without external sound or headphones.
- the display space on the user interface can be saved.
- There is no minimap to prompt the position, and only the multiple visual representations of the first sound indicator on the compass are used to prompt the sound in various ways, which can bring the user a more immersive gaming experience.
- The method provided in this embodiment also, by canceling or skipping the display of the (first) sound indicator when the first virtual character enters the deafened state, improves the accuracy with which the user judges the spatial position of the first sound source from visual representation alone.
- step 205 may also optionally include at least one of the following steps:
- Determine the pitch angle (pitch) of the first sound source relative to the first virtual character; based on the value range of the pitch angle, determine the vertical orientation of the first sound source.
- The vertical orientation of the first sound source is determined using the pitch angle: when the pitch angle of the first sound source relative to the first virtual character is within -17° to 17°, or 163° to 180°, or -180° to -163°, the vertical orientation of the first sound source is determined to be the middle orientation relative to the first virtual character; when the pitch angle is within 17° to 163°, the vertical orientation is the upper orientation relative to the first virtual character; when the pitch angle is within -163° to -17°, the vertical orientation is the lower orientation relative to the first virtual character.
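The pitch-angle ranges above can be expressed as a small classification function; this is an illustrative sketch, and the function name is hypothetical.

```python
def vertical_orientation(pitch_deg: float) -> str:
    # Classify the first sound source's vertical orientation relative to
    # the first virtual character, using the pitch-angle ranges from the
    # text: middle for roughly level (or directly behind), upper for
    # positive pitch, lower for negative pitch.
    if -17 <= pitch_deg <= 17 or pitch_deg >= 163 or pitch_deg <= -163:
        return "middle"
    if 17 < pitch_deg < 163:
        return "upper"
    return "lower"  # -163 < pitch_deg < -17
```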
- the color of the first sound indicator is determined based on the sound type of the first sound.
- Table 1 shows the first correspondence between sound types and colors.
- the terminal can determine the color corresponding to the sound type of the first sound, and determine the color as the color of the first sound indicator.
- the icon style of the first sound indicator is determined according to the sound type of the first sound.
- Table 2 shows the second corresponding relationship between sound types and icon styles.
- the terminal can determine the icon style corresponding to the sound type of the first sound by querying the second correspondence, and determine the icon style as the icon style of the first sound indicator.
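The lookups against Table 1 and Table 2 can be sketched as dictionary queries. The concrete table contents are hypothetical, extrapolated from the examples in the text (white for footsteps/NPC shouts, red for gunshots/explosions; gun, footprint, head, and explosion icon styles).

```python
# Hypothetical contents of Table 1 (sound type -> color) and
# Table 2 (sound type -> icon style).
COLOR_TABLE = {"footsteps": "white", "npc_shout": "white",
               "gunshot": "red", "explosion": "red"}
ICON_TABLE = {"gunshot": "gun", "footsteps": "footprint",
              "human_voice": "head", "explosion": "explosion"}

def indicator_style(sound_type: str):
    # Query both correspondences; fall back to defaults for types
    # not present in the tables.
    return (COLOR_TABLE.get(sound_type, "white"),
            ICON_TABLE.get(sound_type, "generic"))

print(indicator_style("gunshot"))  # ('red', 'gun')
```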
- In the case where the third visual representation includes the amplitude magnitude of the first sound wave amplitude spectrum, this step includes the following:
- Step 1: Determine the arrival sound size of the first sound according to the original sound size of the first sound and influence parameters, where the influence parameters include at least one of the following:
- the sound-related equipment worn by the first virtual character includes: at least one of different types of helmets and earphones.
- The type of equipment and its wearing condition will affect the sound size of the first sound.
- the magnitude of the arriving sound determined when the first virtual character is wearing headphones is greater than the magnitude of the arriving sound determined when the first virtual character is not wearing headphones;
- the magnitude of the arrival sound determined in the case of the helmet is smaller than the magnitude of the arrival sound determined in the case of the first virtual character not wearing the helmet.
- the sound-related equipment worn by the second virtual character includes: at least one of different types of firearms, different ammunition types, and mufflers.
- The type of equipment and its wearing condition will affect the sound size of the first sound.
- the magnitude of the arriving sound determined when the second virtual character wears the muffler is smaller than the magnitude of the arriving sound determined when the second virtual character does not wear the muffler.
- the sound of the first virtual character's shoes touching different grounds will affect the volume of the sound
- the sound of the first virtual character's shoes of different materials touching the same ground will affect the volume of the sound
- arrival sound size = (original sound size × influence coefficient of original sound size) × (1 − sound distance / (maximum effective distance of sound × influence coefficient of maximum distance))
- the magnitude of the original sound is the magnitude of the sound emitted by the first sound at the first sound source.
- the influence coefficient of the original sound level is related to the above-mentioned influence parameters, and is usually set as an empirical value by a designer.
- the influence coefficient at the maximum distance of the sound is used to indicate the sound attenuation characteristics, which is related to the above influence parameters and is usually set by the designer as an empirical value.
- For example, the first virtual character wears a sound-isolating helmet and hears a silenced gunshot from 75 meters away.
- the original sound volume of the gunshot is 100, and the maximum effective distance is 150m.
- the influence coefficient of the silencer on the original sound level is 1, and the influence coefficient on the maximum distance is 0.5; the influence coefficient of the helmet on the original sound level is 0.5, and the influence coefficient on the maximum distance is 0.5.
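The worked example can be run through the attenuation formula above. This is a sketch only: the text does not state how multiple influence coefficients combine, so multiplying them is an assumption, as is clamping a negative result to zero (a negative arrival size would mean the sound is inaudible).

```python
def arrival_sound_size(original, distance, max_distance,
                       orig_coeffs=(), dist_coeffs=()):
    # Sketch of the attenuation formula from the text; combining several
    # influence coefficients by multiplication is an assumption.
    orig_coef = 1.0
    for c in orig_coeffs:
        orig_coef *= c
    dist_coef = 1.0
    for c in dist_coeffs:
        dist_coef *= c
    size = (original * orig_coef) * (1 - distance / (max_distance * dist_coef))
    return max(size, 0.0)  # clamp: a negative result means inaudible

# Worked example: silenced gunshot (coefficients 1 on size, 0.5 on max
# distance) heard through a sound-isolating helmet (0.5 and 0.5),
# original size 100, maximum effective distance 150 m, distance 75 m.
print(arrival_sound_size(100, 75, 150,
                         orig_coeffs=(1.0, 0.5),
                         dist_coeffs=(0.5, 0.5)))  # 0.0 -> inaudible
```

Under these assumptions, the combined max-distance coefficient of 0.25 shrinks the effective range to 37.5 m, so the 75 m gunshot is clamped to zero, i.e. not prompted.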
- Step 2: Determine the amplitude of the first sound wave amplitude spectrum according to the arrival sound size of the first sound at the first virtual character.
- the client maps the arrival sound magnitude of the first sound to the sound wave magnitude of the first sound wave magnitude spectrum through the conversion curve of "sound magnitude-ripple magnitude".
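The "sound magnitude to ripple magnitude" conversion curve can be sketched as piecewise-linear interpolation over designer-tuned control points; the control points below are invented for illustration.

```python
# Hypothetical "sound size -> ripple amplitude" conversion curve, given
# as (sound_size, amplitude) control points; the real curve is a
# designer-tuned mapping, not disclosed in the text.
CURVE = [(0, 0.0), (25, 0.2), (50, 0.5), (100, 1.0)]

def ripple_amplitude(sound_size: float) -> float:
    # Piecewise-linear interpolation over the control points,
    # clamping below the first point and above the last.
    pts = sorted(CURVE)
    if sound_size <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if sound_size <= x1:
            t = (sound_size - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[-1][1]
```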
- In the case where the fourth visual representation includes the start display time of the first sound indicator, the start display time of the first sound indicator is determined according to the sound distance, where the start display time is later than the generation time of the first sound.
- start display time = first sound generation time + sound distance / sound propagation speed in the virtual environment
- the first sound generation time is the time when the first sound source emits the first sound
- the sound distance is the distance between the first sound source and the first virtual character
- The sound propagation speed in the virtual environment is usually set by the designer as an empirical value.
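The delay formula above is a one-liner; as an illustrative sketch, the 340 m/s default (the real-world speed of sound) stands in for the designer-tuned constant and is an assumption.

```python
def start_display_time(generation_time: float, distance: float,
                       speed: float = 340.0) -> float:
    # Delay the indicator by the simulated propagation time:
    # start display time = generation time + distance / propagation speed.
    return generation_time + distance / speed

# A gunshot 680 m away, fired at t=10 s, is prompted 2 s later.
print(start_display_time(10.0, 680.0))  # 12.0
```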
- the first sound indicator is represented by a sound wave amplitude spectrum
- the fifth visual representation includes the vibration frequency of the sound wave amplitude spectrum
- the vibration frequency of the sound wave amplitude spectrum is determined according to the action frequency when the first sound source emits the first sound.
- For example, when the first virtual character is running, the sound wave amplitude spectrum is displayed in white and its jitter frequency is larger, to show the rush of running.
- When the first virtual character crouches and steps slowly, the jitter frequency of the sound wave amplitude spectrum is smaller, to convey slow movement and distinguish it from running.
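The action-frequency-to-jitter mapping can be sketched as a lookup; the frequency values below are illustrative only, not disclosed values.

```python
# Hypothetical mapping from movement state to waveform jitter frequency
# (in Hz); running jitters fastest, crouching slowest.
JITTER_HZ = {"running": 8.0, "walking": 4.0, "crouching": 1.5}

def jitter_frequency(action: str) -> float:
    # Fall back to the walking rate for unlisted actions.
    return JITTER_HZ.get(action, JITTER_HZ["walking"])
```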
- FIG. 12 shows a configuration interface 1200 for the impact coefficient of the helmet headset on the arrival sound level of the first sound.
- the configuration interface 1200 includes three configuration items: increasing the sound wave amplitude coefficient 1201, the sound size influence coefficient 1202 and the sound maximum distance influence coefficient 1203.
- The increase sound wave amplitude coefficient 1201 configuration item is used to configure the coefficient for increasing the sound wave amplitude when the first virtual character wears the helmet headset.
- The sound size influence coefficient 1202 configuration item is used to configure the influence coefficient on the arrival sound size of the first sound when the first virtual character wears the helmet headset.
- The sound maximum distance influence coefficient 1203 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can propagate when the first virtual character wears the helmet headset.
- When the sound size influence coefficient 1202 and the sound maximum distance influence coefficient 1203 configuration items are both 1, the first virtual character wearing the helmet headset has no influence on the arrival sound size of the first sound or on the maximum distance of the first sound.
- FIG. 13 shows a configuration interface 1300 for the influence coefficient of the muffler on the magnitude of the arrival sound of the first sound.
- The configuration interface 1300 includes two configuration items: the muffler sound size influence coefficient 1301 and the muffler sound maximum distance influence coefficient 1302.
- The muffler sound size influence coefficient 1301 configuration item is used to configure the influence coefficient on the arrival sound size of the first sound when the muffler is equipped; the muffler sound maximum distance influence coefficient 1302 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can travel when the muffler is equipped.
- When the muffler sound size influence coefficient 1301 is 1, equipping the muffler has no influence on the arrival sound size of the first sound.
- FIG. 14 shows a configuration interface 1400 of the impact coefficient of the stepping material on the magnitude of the arrival sound of the first sound.
- the configuration interface 1400 includes two configuration items for the marble and metal material 1410: the sound size influence coefficient 1401 and the sound maximum distance influence coefficient 1402.
- the sound size influence coefficient 1401 configuration item is used to configure the influence coefficient on the arrival sound size of the first sound when the first virtual character steps on ground made of marble and metal.
- the sound maximum distance influence coefficient 1402 configuration item is used to configure the influence coefficient on the maximum distance that the first sound can travel when the first virtual character steps on ground made of marble and metal.
- when the sound size influence coefficient 1401 and the sound maximum distance influence coefficient 1402 are both 1, stepping on ground made of marble and metal has no effect on the arrival sound size of the first sound or on the maximum distance of the first sound.
- FIG. 15 shows a configuration interface 1500 for different sound types and basic configuration parameters.
- the configuration interface 1500 allows sprinting 1501, climbing 1502, squatting 1503, walking quietly 1504, climbing quietly 1505, squatting quietly 1506, single-shot firing 1507, burst firing 1508, NPC shouting 1509, rubbing against grass 1510, injured shouting 1511, dying cry 1512, explosion 1513, and walking 1514 to be configured separately; each sound type has eight configurable parameters: base sound size 1520, sound icon display time 1530, sound icon fade time 1540, icon index 1550, sound icon refresh interval 1560, sound maximum effective range 1570, sound waveform jitter frequency 1580, and opacity curve 1590.
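The eight per-sound-type parameters above can be modeled as a simple configuration record. The following Python sketch is illustrative only; all field names and example values are assumptions, not part of the disclosed embodiment.

```python
from dataclasses import dataclass

@dataclass
class SoundTypeConfig:
    """Per-sound-type base parameters (names and values are illustrative)."""
    base_sound_size: float        # base loudness at the source (item 1520)
    icon_display_time: float      # seconds the sound icon stays visible (1530)
    icon_fade_time: float         # seconds for the icon to fade out (1540)
    icon_index: int               # which icon style to draw (1550)
    icon_refresh_interval: float  # minimum seconds between icon refreshes (1560)
    max_effective_range: float    # metres beyond which the sound is inaudible (1570)
    waveform_jitter_freq: float   # jitter frequency of the waveform animation (1580)
    opacity_curve: str            # identifier of the opacity-over-time curve (1590)

# Hypothetical entry for the "walking" sound type.
walking = SoundTypeConfig(20, 1.0, 0.3, 3, 0.5, 200, 8.0, "linear")
```

A designer could then keep one such record per sound type (sprinting, climbing, explosion, and so on) in a lookup table.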
- FIG. 16 shows a configuration interface 1600 of configuration parameters corresponding to different firearm types, which are derived by addition and subtraction from the single-shot firing parameters.
- the configuration interface includes a configuration page 1600 for the basic parameters of single-shot firing, a parameter configuration page 1601 for pistols, and a parameter configuration page 1602 for bolt-action rifles, the latter two built on the single-shot parameters.
- the single-shot configuration page includes basic sound size 1620, sound icon display time 1630, sound icon fade time 1640, icon index 1650, sound icon refresh interval 1660, sound maximum effective range 1670, and sound waveform jitter frequency 1680.
- on the pistol configuration page, the sound maximum effective range 1670 configuration item is -100 meters.
- that is, the pistol's maximum effective range is 100 meters smaller than the 200 meters configured for single-shot firing, so the maximum effective range of the pistol's sound is 100 meters.
- the basic sound size 1620, sound icon display time 1630, sound icon fade time 1640, sound icon refresh interval 1660, and sound waveform jitter frequency 1680 are the same as for single-shot firing.
- on the bolt-action rifle configuration page, the basic sound size 1620 configuration item is +20 and the sound icon fade time 1640 configuration item is +0.3.
- that is, the bolt-action rifle's basic sound size is 20 larger than the single-shot basic sound size 1620, and its sound icon fade time is 0.3 longer than the single-shot sound icon fade time 1640.
- the sound maximum effective range 1670, sound icon display time 1630, sound icon refresh interval 1660, and sound waveform jitter frequency 1680 are the same as for single-shot firing.
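The delta-based per-firearm configuration described above can be sketched as follows; the parameter names, the dictionary layout, and the fallback for unknown weapons are assumptions for illustration, while the numeric deltas (pistol range -100 m, bolt-action rifle sound size +20) come from the description.

```python
# Base parameters for single-shot firing (subset, values from the description).
SINGLE_SHOT_BASE = {
    "base_sound_size": 20,
    "icon_fade_time": 0.3,
    "max_effective_range": 200,  # metres
}

# Each firearm type stores only the offsets it applies to the base.
WEAPON_DELTAS = {
    "pistol": {"max_effective_range": -100},       # 200 - 100 = 100 m
    "bolt_action_rifle": {"base_sound_size": 20},  # louder report: 20 + 20 = 40
}

def weapon_config(weapon: str) -> dict:
    """Resolve a firearm's parameters by applying its deltas to the base."""
    cfg = dict(SINGLE_SHOT_BASE)
    for key, delta in WEAPON_DELTAS.get(weapon, {}).items():
        cfg[key] += delta
    return cfg
```

For example, `weapon_config("pistol")` yields a maximum effective range of 100 meters while leaving the other single-shot parameters unchanged.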
- FIG. 17 shows a configuration interface 1700 for general configuration parameters.
- the configuration interface 1700 includes the maximum number of sound ripples displayed 1710, the icon display angle threshold 1720, the upper angle threshold 1730, the lower angle threshold 1740, war sound, and other general configuration items.
- the icon display angle threshold 1720 configuration item is 90°: by default, sound icons of effective sound sources are displayed within the area covered when the first virtual character's camera is rotated 90° to the left and right. The mapping curve 1770 of sound ripple height versus sound size has sound intensity on the horizontal axis and waveform height on the vertical axis; it converts the arrival sound size of the first sound into the waveform height of the first sound indicator.
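The intensity-to-height mapping of curve 1770 can be evaluated by piecewise-linear interpolation between designer-chosen control points. The control point values below are purely hypothetical; only the curve's axes (intensity in, height out) come from the description.

```python
# Hypothetical control points of curve 1770: (sound intensity, waveform height).
CURVE_1770 = [(0.0, 0.0), (20.0, 8.0), (50.0, 20.0), (100.0, 32.0)]

def ripple_height(intensity: float) -> float:
    """Piecewise-linear interpolation of waveform height from arrival intensity."""
    pts = CURVE_1770
    if intensity <= pts[0][0]:
        return pts[0][1]          # clamp below the first control point
    if intensity >= pts[-1][0]:
        return pts[-1][1]         # clamp above the last control point
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= intensity <= x1:
            t = (intensity - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

For instance, an intensity of 35 falls halfway between the (20, 8) and (50, 20) control points and maps to a height of 14.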
- with the method provided by this embodiment, when a first sound source exists in the surrounding environment of the first virtual character, a first sound indicator can be displayed based on the first azimuth scale in the compass information, and the first sound indicator simultaneously indicates the first horizontal azimuth and the first vertical azimuth corresponding to the first sound source. The user can therefore accurately judge the spatial position of the first sound source from the visual representation alone; even without external sound, or in scenes where earphones are restricted, enough effective spatial information about the sound source can still be obtained.
- the method provided by this embodiment can also further distinguish the properties of the first sound source according to the amplitude, jitter frequency, and duration of the sound wave amplitude spectrum below the sound indicator, and can determine the vertical azimuth of the first sound source in the virtual world by comparing the sound source angle with the pitch angle, improving the prompt effect in terms of sound size, sound frequency, and sound type.
- Fig. 18 shows a flow chart of a sound prompting method in a virtual world provided by an exemplary embodiment of the present application.
- the method can be performed by the first terminal 120 or the second terminal 160 shown in FIG. 1, and the first terminal 120 or the second terminal 160 can be collectively referred to as a terminal.
- the method includes the following steps:
- Step 1802 Obtain the parameters of the first sound
- the terminal acquires parameters of the first sound emitted by the first sound source. In other words, the terminal acquires parameters of the sound event of the first sound emitted by the first sound source.
- Step 1804 Identify the sound type of the first sound
- the sound type of the first sound is determined according to the sound parameters of the first sound acquired by the terminal, and the second visual representation carried by the corresponding first sound indicator is determined according to different sound types.
- in the case where the second visual representation is color: if, according to the sound parameters of the first sound acquired by the terminal, the first sound is judged to be a virtual character's footsteps or an NPC's shout, the first sound indicator is displayed in white; if the first sound is judged to be a gunshot or an explosion, the first sound indicator is displayed in red.
- in the case where the second visual representation is an icon style: the type of the first sound is judged according to the sound parameters acquired by the terminal, and an icon style corresponding to that type is used for the first sound indicator. For example, if the first sound is judged to be the footsteps of a virtual character, the first sound indicator uses the footprint icon style; if it is judged to be a gunshot, the first sound indicator uses the firearm icon style.
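The sound-type-to-style selection above is a simple lookup. The sketch below mirrors the color and icon tables given later in the description (white/footprint for footsteps, red/firearm for gunshots, orange/flame for explosions, blue/human head for shouts); the function name and the fallback for unknown types are assumptions.

```python
# (color, icon style) per sound type, mirroring the description's tables.
SOUND_STYLE = {
    "footsteps": ("white", "footprint"),
    "gunshot":   ("red", "firearm"),
    "explosion": ("orange", "flame"),
    "shout":     ("blue", "human_head"),
}

def indicator_style(sound_type: str) -> tuple:
    """Return the (color, icon style) of the first sound indicator.

    Unknown types fall back to a neutral style (assumed behaviour).
    """
    return SOUND_STYLE.get(sound_type, ("white", "generic"))
```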
- Step 1806a Calculate the arrival sound size according to the distance between the first sound source and the first virtual character and the original sound size configured for the type of the first sound;
- the sound event of the first sound carries the three-dimensional coordinates of the first sound source.
- the terminal can calculate the distance between the first sound source and the first virtual character by calculating the distance between the three-dimensional coordinates of the first sound source and the three-dimensional coordinates of the first virtual character.
- the sound size attenuates with distance: the longer the distance between the first sound source and the first virtual character, the smaller the arrival sound size of the first sound; the shorter the distance, the larger the arrival sound size of the first sound.
- the sound event of the first sound also carries the original sound magnitude of the first sound source, and the terminal attenuates the original sound magnitude of the first sound by using the distance as an influencing parameter.
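The distance computation in step 1806a is a plain Euclidean distance between the two three-dimensional coordinates; a minimal sketch (coordinate tuples are an assumed representation):

```python
import math

def distance_3d(a: tuple, b: tuple) -> float:
    """Euclidean distance between two 3-D coordinates (x, y, z)."""
    return math.dist(a, b)

# e.g. a source 3 m east and 4 m north of the avatar is 5 m away
d = distance_3d((0, 0, 0), (3, 4, 0))
```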
- Step 1806b Calculate the size of the arrival sound according to the influencing parameters such as tactical props and equipment;
- the terminal also determines the corresponding influence coefficient according to the influence parameters such as tactical props and equipment, and then calculates the final sound level of the first sound.
- For example, tactical props and equipment include:
- the sound-related equipment worn by the first virtual character includes at least one of: different types of helmets and earphones. The type of equipment and whether it is worn will affect the arrival sound size of the first sound.
- the sound-related equipment worn by the second virtual character includes at least one of: different types of firearms, different ammunition types, and mufflers. The type of equipment and whether it is worn will affect the arrival sound size of the first sound.
- the first virtual character's shoes touching different grounds will produce sounds of different sizes, and shoes of different materials touching the same ground will also produce sounds of different sizes.
- arrival sound size = (original sound size * influence coefficient of original sound size) * (1 - sound distance / (maximum effective distance of sound * influence coefficient of maximum distance)).
- the original sound size is the size of the first sound as emitted at the first sound source.
- the influence coefficient of the original sound level is related to the above-mentioned influence parameters, and is usually set as an empirical value by a designer.
- the influence coefficient at the maximum distance of the sound is used to indicate the sound attenuation characteristics, which is related to the above influence parameters and is usually set by the designer as an empirical value.
- For example, the first virtual character wears a sound-isolating helmet and hears a muffled gunshot from 75 m away.
- the original sound volume of the gunshot is 100, and the maximum effective distance is 150m.
- the influence coefficient of the silencer on the original sound level is 1, and the influence coefficient on the maximum distance is 0.5; the influence coefficient of the helmet on the original sound level is 0.5, and the influence coefficient on the maximum distance is 0.5.
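The formula and the worked figures above can be checked with a short sketch. Two points are assumptions not stated explicitly in the text: that multiple pieces of equipment combine their coefficients multiplicatively, and that a negative result is clamped to zero (inaudible).

```python
import math

def arrival_sound(original: float, distance: float, max_distance: float,
                  orig_coefs=(), dist_coefs=()) -> float:
    """Arrival sound size per the description's attenuation formula.

    Assumptions: coefficients from several items multiply together, and a
    source beyond the effective range yields 0 (inaudible).
    """
    orig_c = math.prod(orig_coefs) if orig_coefs else 1.0
    dist_c = math.prod(dist_coefs) if dist_coefs else 1.0
    effective_range = max_distance * dist_c
    if distance >= effective_range:
        return 0.0
    return (original * orig_c) * (1 - distance / effective_range)

# Bare gunshot at 75 m: 100 * (1 - 75/150) = 50
bare = arrival_sound(100, 75, 150)

# Muffler (coefs 1, 0.5) plus sound-isolating helmet (coefs 0.5, 0.5):
# effective range = 150 * 0.5 * 0.5 = 37.5 m < 75 m, so the shot is inaudible.
muffled = arrival_sound(100, 75, 150, orig_coefs=(1.0, 0.5), dist_coefs=(0.5, 0.5))
```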
- step 1806a and step 1806b can be calculated at the same time; alternatively, the sound size before being affected by the tactical props and equipment is used as an intermediate value, and the final sound size after being affected by the tactical props and equipment is then calculated.
- Step 1808 Determine whether the horizontal azimuth of the first sound source is within the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information;
- the compass information includes a sequence of azimuth scales, and the azimuth scales in the azimuth scale sequence are used to indicate the horizontal orientation that the first virtual character is facing in the virtual world.
- the azimuth scales of the horizontal azimuths observable from the first virtual character's perspective in the virtual world are displayed in the azimuth scale sequence, while azimuth scales of horizontal azimuths not observable at the current viewing angle may not be displayed in the azimuth scale sequence.
- a direction scale within a preset range centered on the horizontal position right in front of the first virtual character is displayed in the sequence of direction scales.
- if the horizontal azimuth of the first sound source is outside the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information, step 1810 is executed to represent the sound in the form of a second sound indicator; if the horizontal azimuth of the first sound source is within the visible azimuth range corresponding to the azimuth scale sequence displayed in the compass information, step 1812 is executed to calculate, from the pitch angle, whether the sound source is above or below the first virtual character.
- Step 1810 express sound in the form of a second sound indicator
- the sound source is treated as a second sound source, and the second sound indicator is displayed based on the edge azimuth scale in the azimuth scale sequence closest to the second sound source's horizontal azimuth.
- Step 1812 Judge whether the first sound source is above or below the first virtual character according to the pitch angle
- when the first sound source is above or below the first virtual character, step 1814 is executed to present the sound in the form of the upper or lower first sound indicator: when the first sound source is above the first virtual character, the upper information is displayed in the form of the upper first sound indicator; when the first sound source is below the first virtual character, the lower information is displayed in the form of the lower first sound indicator.
- when the first sound source is neither above nor below the first virtual character, step 1816 is executed to present the sound in the form of the middle first sound indicator, that is, the first sound source is level with the first virtual character, and the middle information is displayed in the form of the middle first sound indicator.
- when the pitch angle of the first sound source relative to the first virtual character is in the range of -17° to 17°, 163° to 180°, or -180° to -163°, it is determined that the first sound source is level with the first virtual character; when the pitch angle is in the range of 17° to 163°, it is determined that the first sound source is above the first virtual character; when the pitch angle is in the range of -163° to -17°, it is determined that the first sound source is below the first virtual character.
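The pitch-angle classification above can be sketched as a small function. The treatment of the exact boundary values (e.g. precisely 17°) is not specified in the text, so the comparisons below are an assumption.

```python
def vertical_orientation(pitch_deg: float) -> str:
    """Classify the first sound source's pitch angle relative to the avatar.

    Ranges from the description: middle for -17..17 and 163..180 / -180..-163,
    above for 17..163, below for -163..-17. Boundary handling is assumed.
    """
    if -17 <= pitch_deg <= 17 or pitch_deg >= 163 or pitch_deg <= -163:
        return "middle"
    if 17 < pitch_deg < 163:
        return "above"
    return "below"  # remaining range: -163 < pitch_deg < -17
```

So a source at 90° pitch is shown with the upper indicator, one at -45° with the lower indicator, and one at 170° (directly behind, roughly level) with the middle indicator.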
- Step 1814 present the sound in the form of the first sound indicator above or below;
- Step 1816 present the sound in the form of the middle first sound indicator
- Step 1818 Determine whether the first sound is a gunshot
- if the first sound is a gunshot, step 1820 is executed to differentiate and present it according to the gunshot type; otherwise, step 1822 is executed to differentiate and present the first sound according to character sounds and other sound types.
- Step 1820 Differentiate and express the first sound according to the type of gun sound
- For example, the terminal determines from the sound parameters of the first sound that the first sound is a gunshot, confirms that the first sound indicator is red or uses the firearm icon style, and then determines the sound wave amplitude, sound wave vibration frequency, and sound wave duration of the first sound indicator from the first sound parameters.
- Step 1822 Differentiate and express the first sound according to the type of the avatar and other sounds.
- For example, the terminal determines from the sound parameters of the first sound that the first sound source is a virtual character, confirms that the first sound indicator is white or uses the footprint or human-head icon style, and then determines the sound wave amplitude, sound wave vibration frequency, and sound wave duration of the first sound indicator from the first sound parameters.
- in summary, the terminal obtains the sound parameters of the first sound to identify its type, calculates the arrival sound size of the first sound from the distance between the first sound source and the first virtual character and from the original sound size of the first sound source, and determines the horizontal and vertical azimuths of the first sound, thereby improving the accuracy of the prompt effect in terms of sound size, sound frequency, and sound type.
- Fig. 19 shows a schematic structural diagram of a sound prompting device in a virtual world provided by an exemplary embodiment of the present application.
- the device can be implemented as all or a part of computer equipment through software, hardware or a combination of the two, and the device 1900 includes:
- the display module 1901 is used to display the viewing-angle picture of the first virtual character; compass information is displayed on the picture, the compass information includes an azimuth scale sequence, and the azimuth scales in the azimuth scale sequence are used to indicate a horizontal azimuth that the first virtual character faces in the virtual world;
- a control module 1902 configured to control the activities of the first virtual character in the virtual world
- the display module 1901 is configured to, while the first virtual character is active in the virtual world, display a first sound indicator based on the first azimuth scale in the azimuth scale sequence if a first sound source exists in the surrounding environment of the first virtual character, the first sound indicator being used to indicate the horizontal azimuth and vertical azimuth corresponding to the first sound source.
- the display module 1901 is configured to display a first sound indicator with a visual representation based on the first azimuth scale in the sequence of azimuth scales, and the first sound indicator The central position of the indicator is aligned with the first azimuth scale, the first azimuth scale is used to indicate the horizontal azimuth of the first sound source, and the visual representation of the first sound indicator is used to indicate the first sound source The vertical orientation of the source.
- the visual representation of the first sound indicator includes a first visual representation
- the first visual representation is used to indicate the vertical orientation of the first sound source
- the first visual representation includes at least one of: the shape of the first sound indicator; a vertical orientation scale in the first sound indicator; an arrow in the first sound indicator; a text prompt in the first sound indicator.
- the first visual representation includes: n first visual representations, the n first visual representations are in one-to-one correspondence with n vertical orientations, and n is a positive integer greater than 1 ;
- the display module 1901 is configured to display a first sound indicator with an i-th first visual representation based on the first azimuth scale in the compass information, and the i-th first visual representation is used to indicate the The first sound source corresponds to the i-th vertical orientation, and i is a positive integer not greater than n.
- the vertical orientation includes: upper orientation, middle orientation and lower orientation
- the display module 1901 is configured to: display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator in an upward shape, the first sound indicator being used to indicate that the vertical orientation of the first sound source is the upper orientation; or display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator in a vertically symmetrical shape, the first sound indicator being used to indicate that the vertical orientation of the first sound source is the middle orientation; or display, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator in a downward shape, the first sound indicator being used to indicate that the vertical orientation of the first sound source is the lower orientation.
- the display module 1901 is configured to display the first sound indicator with the vertical azimuth scale based on the first azimuth scale in the azimuth scale sequence, the vertical an azimuth scale for indicating a pitch angle of the first sound source relative to the first avatar; or, displaying a first sound indicator with the arrow based on the first azimuth scale in the sequence of azimuth scales , the arrow direction of the arrow is used to indicate the vertical orientation of the first sound source; or, based on the first orientation scale in the orientation scale sequence to display the first sound indicator with the text prompt, the The text prompt is used to indicate the vertical orientation of the first sound source.
- the first sound source is used to emit a first sound
- the visual representation of the first sound indicator also includes other visual representations, where the first visual representation and the other visual representations are different types of visual representations, and the other visual representations include at least one of the following:
- a second visual representation for indicating the sound type of the first sound; a third visual representation for indicating the sound size of the first sound; a fourth visual representation for indicating the sound distance of the first sound; a fifth visual representation for indicating the action frequency of the first sound.
- the device also includes:
- a determining module 1903 configured to determine the visual representation of the first sound indicator according to the sound parameters of the first sound.
- the sound parameter includes a sound type
- the determining module 1903 is configured to:
- the second visual representation of the first sound indicator is determined based on a sound type of the first sound.
- the second visual representation includes a color of the first sound indicator or an icon style of the first sound indicator.
- the first sound indicator is represented by a sound wave amplitude spectrum
- the third visual representation includes the magnitude of the first sound wave amplitude spectrum
- the sound parameters include sound magnitude
- the determination module 1903 is configured to determine the amplitude of the first sound wave amplitude spectrum according to the arrival sound level of the first sound at the first virtual character.
- the determination module 1903 is configured to determine the arrival sound size of the first sound according to the original sound size of the first sound and influence parameters, the influence parameters including at least one of the following:
- the distance between the first sound source and the first virtual character; the equipment worn by the first virtual character; in the case where the first sound source is a second virtual character, the equipment worn by the second virtual character; the material of the first sound source or the material that the first sound source comes into contact with.
- for the same original sound size, the arrival sound size determined when the first virtual character is wearing earphones is larger than the arrival sound size determined when the first virtual character is not wearing earphones; or, for the same original sound size, the arrival sound size determined when the first virtual character is wearing a helmet is smaller than the arrival sound size determined when the first virtual character is not wearing a helmet; or, for the same original sound size, the arrival sound size determined when the second virtual character is wearing a muffler is smaller than the arrival sound size determined when the muffler is not worn.
- the fourth visual representation includes the start display time of the first sound indicator, the sound parameters include sound propagation speed, and the determining module 1903 is configured to The sound propagation speed between the first sound source and the first virtual character determines the start display time of the first sound indicator, and the start display time is later than the generation time of the first sound.
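The start display time described above is just the generation time plus the travel delay. A minimal sketch, assuming a default in-game propagation speed of 340 m/s (the actual speed used by the embodiment is not stated):

```python
def start_display_time(generation_time: float, distance: float,
                       propagation_speed: float = 340.0) -> float:
    """Time at which the first sound indicator starts to display.

    The indicator appears only after the sound has had time to travel from
    the source to the first virtual character, so the result is always later
    than the generation time. Speed default is an assumption (m/s).
    """
    return generation_time + distance / propagation_speed
```

For example, a sound generated at t = 0 s from a source 340 m away would begin displaying at t = 1 s.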
- the first sound indicator is represented by a sound wave amplitude spectrum
- the fifth visual representation includes the jitter frequency of the sound wave amplitude spectrum
- the sound parameter includes the action frequency of the first sound source
- the determination module 1903 is configured to determine the jitter frequency of the sound wave amplitude spectrum according to the action frequency of the first sound source when it emits the first sound.
- the visual representation of the first sound indicator includes: at least one of shape, pattern, color, texture, text, animation effect, start display time, continuous display time, and blanking time A sort of.
- the azimuth scale sequence in the compass information corresponds to the visible azimuth range of the first virtual character
- the display module 1901 is configured to There is a second sound source in the surrounding environment and the horizontal azimuth of the second sound source is outside the visible azimuth range, based on the edge azimuth scale displaying in the azimuth scale sequence closest to the horizontal azimuth of the second sound source
- a second sound indicator, the second sound indicator is used to indicate that the second sound source exists along the horizontal direction indicated by the edge direction scale.
- the display module 1901 is configured to cancel displaying the first sound indicator when the first virtual character enters a deaf state.
- the determining module 1903 is configured to, when the first sound source emits at least two sounds and the difference between the generation times of the at least two sounds is less than a threshold, determine the sound with the largest volume among the at least two sounds as the first sound.
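The near-simultaneous-sound rule above can be sketched as a deduplication pass over time-sorted sound events. The event representation (time, volume) and the greedy merge strategy are assumptions for illustration.

```python
def dedupe_sounds(events, threshold: float = 0.1):
    """Keep only the loudest of sounds generated within `threshold` seconds.

    events: list of (generation_time, volume) tuples from one sound source;
    the structure and 0.1 s default are illustrative assumptions.
    """
    kept = []
    for t, v in sorted(events):
        if kept and t - kept[-1][0] < threshold:
            # Within the threshold of the previous kept sound: keep the louder one.
            if v > kept[-1][1]:
                kept[-1] = (t, v)
        else:
            kept.append((t, v))
    return kept
```

For example, two shots 0.05 s apart collapse to the louder one, while a sound 0.25 s later is kept separately.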
- the present application also provides a computer device; the computer device includes a processor and a memory, at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the sound prompting method in a virtual world provided by the above method embodiments.
- the computer device may be a terminal as shown in FIG. 20 below.
- Fig. 20 shows a structural block diagram of a computer device 2000 provided by an exemplary embodiment of the present application.
- the computer device 2000 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
- the computer device 2000 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and other names.
- a computer device 2000 includes: a processor 2001 and a memory 2002 .
- the processor 2001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- the processor 2001 can be implemented in at least one hardware form among DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- the processor 2001 can also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state.
- the processor 2001 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content to be displayed on the display screen.
- the processor 2001 may further include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
- Memory 2002 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 2002 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
- the non-transitory computer-readable storage medium in the memory 2002 is used to store at least one instruction, and the at least one instruction is to be executed by the processor 2001 to implement the sound prompting method in a virtual world provided by the method embodiments in this application.
- the computer device 2000 may optionally further include: a peripheral device interface 2003 and at least one peripheral device.
- the processor 2001, the memory 2002, and the peripheral device interface 2003 may be connected through buses or signal lines.
- Each peripheral device can be connected to the peripheral device interface 2003 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a radio frequency circuit 2004 , a display screen 2005 , a camera 2006 , an audio circuit 2007 , a positioning component 2008 and a power supply 2009 .
- the computing device 2000 also includes one or more sensors 2010 .
- the one or more sensors 2010 include, but are not limited to: an acceleration sensor 2011 , a gyro sensor 2012 , a pressure sensor 2013 , a fingerprint sensor 2014 , an optical sensor 2015 and a proximity sensor 2016 .
- the structure shown in FIG. 20 does not constitute a limitation on the computer device 2000, which may include more or fewer components than shown in the figure, combine some components, or adopt a different component arrangement.
- the present application also provides a computer-readable storage medium in which at least one piece of program code is stored; the program code is loaded and executed by a processor to implement the above-mentioned sound prompting method in a virtual world.
- a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the above-mentioned sound prompting method in a virtual world.
- the program can be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Acoustics & Sound (AREA)
- Radar, Positioning & Navigation (AREA)
- User Interface Of Digital Computer (AREA)
Description
Color | Sound type |
White | Footsteps |
Red | Gunshot |
Orange | Explosion |
Blue | Shout |
Icon style | Sound type |
Footprint icon style | Footsteps |
Firearm icon style | Gunshot |
Flame icon style | Explosion |
Human-head icon style | Shout |
Claims (22)
- 一种虚拟世界中的声音提示方法,所述方法由终端执行,所述方法包括:显示第一虚拟角色的视角画面,所述视角画面上显示有罗盘信息,所述罗盘信息中包括方位刻度序列,所述方位刻度序列中的方位刻度用于指示所述第一虚拟角色在虚拟世界中面向的水平方位;控制所述第一虚拟角色在所述虚拟世界中活动;在所述第一虚拟角色在所述虚拟世界中活动的过程中,若在所述第一虚拟角色的周围环境中存在第一声源,基于所述方位刻度序列中的第一方位刻度显示第一声音指示器,所述第一声音指示器用于指示所述第一声源对应的水平方位和垂直方位。
- 根据权利要求1所述的方法,所述基于所述方位刻度序列中的第一方位刻度显示第一声音指示器,包括:基于所述方位刻度序列中的所述第一方位刻度显示具有视觉表现的第一声音指示器,所述第一声音指示器的中心位置与所述第一方位刻度对齐,所述第一方位刻度用于指示所述第一声源的水平方位,所述第一声音指示器的视觉表现用于指示所述第一声源的垂直方位。
- 根据权利要求2所述的方法,所述第一声音指示器的视觉表现包括第一视觉表现,所述第一视觉表现用于指示所述第一声源的垂直方位,所述第一视觉表现包括如下至少一种:所述第一声音指示器的形状;所述第一声音指示器中的垂直方位刻度;所述第一声音指示器中的箭头;所述第一声音指示器中的文字提示。
- 根据权利要求3所述的方法,所述垂直方位包括:上部方位、中部方位和下部方位;所述基于所述方位刻度序列中的所述第一方位刻度显示具有视觉表现的第一声音指示器,包括:基于所述方位刻度序列中的所述第一方位刻度显示呈向上形状的第一声音指示器,所述第一声音指示器用于指示所述第一声源的垂直方位为所述上部方位;或,基于所述方位刻度序列中的所述第一方位刻度显示呈上下对称形状的第一声音指示器,所述第一声音指示器用于指示所述第一声源的垂直方位为所述中部方位;或,基于所述方位刻度序列中的所述第一方位刻度显示呈向下形状的第一声音指示器,所述第一声音指示器用于指示所述第一声源的垂直方位为所述下部方位。
- The method according to claim 3, wherein the displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having a visual representation comprises: displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the vertical azimuth scale, the vertical azimuth scale indicating a pitch angle of the first sound source relative to the first virtual character; or displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the arrow, an arrow direction of the arrow indicating the vertical azimuth of the first sound source; or displaying, based on the first azimuth scale in the azimuth scale sequence, a first sound indicator having the text prompt, the text prompt prompting the vertical azimuth of the first sound source.
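The pitch angle that the vertical azimuth scale would display can be derived from the two world positions. A sketch, assuming a coordinate system with the z axis pointing up (the coordinate convention is an assumption, not stated in the claims):

```python
import math

def pitch_angle(listener: tuple, source: tuple) -> float:
    """Pitch angle (degrees) of the source relative to the listener,
    positive when the source is higher. Positions are (x, y, z), z up."""
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dz = source[2] - listener[2]
    horizontal = math.hypot(dx, dy)        # distance in the ground plane
    return math.degrees(math.atan2(dz, horizontal))
```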
- The method according to claim 3, wherein the first sound source is used to emit a first sound, and the visual representation of the first sound indicator further comprises other visual representations, the first visual representation and the other visual representations being visual representations of different types, the other visual representations comprising at least one of: a second visual representation indicating a sound type of the first sound; a third visual representation indicating a sound volume of the first sound; a fourth visual representation indicating a sound distance of the first sound; or a fifth visual representation indicating an action frequency of the first sound.
- The method according to claim 6, further comprising: determining the visual representation of the first sound indicator according to sound parameters of the first sound.
- The method according to claim 7, wherein the sound parameters comprise a sound type, and the determining the visual representation of the first sound indicator according to the sound parameters of the first sound comprises: determining the second visual representation of the first sound indicator according to the sound type of the first sound.
- The method according to claim 8, wherein the second visual representation comprises a color of the first sound indicator or an icon style of the first sound indicator.
- The method according to claim 7, wherein the first sound indicator is represented as a sound-wave amplitude spectrum, the third visual representation comprises an amplitude of the sound-wave amplitude spectrum, and the sound parameters comprise a sound volume; and the determining the visual representation of the first sound indicator according to the sound parameters of the first sound comprises: determining the amplitude of the sound-wave amplitude spectrum according to an arrival volume of the first sound at the first virtual character.
- The method according to claim 10, further comprising: determining the arrival volume of the first sound according to an original volume of the first sound and influence parameters, the influence parameters comprising at least one of: a distance between the first sound source and the first virtual character; equipment worn by the first virtual character; equipment worn by a second virtual character when the first sound source is the second virtual character; or a material of the first sound source or a material contacted by the first sound source.
- The method according to claim 11, wherein, for a same original volume, the arrival volume determined when the first virtual character wears headphones is greater than the arrival volume determined when the first virtual character does not wear headphones; or, for a same original volume, the arrival volume determined when the first virtual character wears a helmet is less than the arrival volume determined when the first virtual character does not wear a helmet; or, for a same original volume, the arrival volume determined when the second virtual character is equipped with a silencer is less than the arrival volume determined when the second virtual character is not equipped with a silencer.
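Claims 11 and 12 together describe how the arrival volume is derived from the original volume and the influence parameters. A hedged sketch of one way to satisfy those ordering constraints; every coefficient (attenuation per meter, headphone gain, helmet and silencer factors) is assumed, not specified by the patent:

```python
def arrival_volume(original: float, distance: float, *,
                   listener_headphones: bool = False,
                   listener_helmet: bool = False,
                   source_silencer: bool = False,
                   attenuation_per_meter: float = 0.01) -> float:
    """Arrival volume of a sound at the listening character.

    Distance attenuates the sound; headphones on the listener increase
    the arrival volume; a helmet on the listener and a silencer on the
    emitting character decrease it. All coefficients are assumptions."""
    v = original * max(0.0, 1.0 - attenuation_per_meter * distance)
    if listener_headphones:
        v *= 1.5   # headphones: arriving sound is louder
    if listener_helmet:
        v *= 0.7   # helmet: arriving sound is muffled
    if source_silencer:
        v *= 0.3   # silencer on the source: emitted gunshot is quieter
    return v
```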
- The method according to claim 7, wherein the fourth visual representation comprises a start display time of the first sound indicator, and the sound parameters comprise a sound propagation speed; and the determining the visual representation of the first sound indicator according to the sound parameters of the first sound comprises: determining the start display time of the first sound indicator based on the sound propagation speed between the first sound source and the first virtual character, the start display time being later than a generation time of the first sound.
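The start-display-time rule above reduces to adding the sound's travel time to its generation time, so that a distant gunshot's indicator appears later than a nearby one's. A sketch, with 340 m/s as an assumed default propagation speed:

```python
def start_display_time(generated_at: float, distance: float,
                       speed: float = 340.0) -> float:
    """Time at which the indicator starts displaying: the sound's
    generation time plus its travel time from source to listener.
    The default speed of 340 m/s is an assumption."""
    return generated_at + distance / speed
```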
- The method according to claim 7, wherein the first sound indicator is represented as a sound-wave amplitude spectrum, the fifth visual representation comprises a jitter frequency of the sound-wave amplitude spectrum, and the sound parameters comprise an action frequency of the first sound source; and the determining the visual representation of the first sound indicator according to the sound parameters of the first sound comprises: determining the jitter frequency of the sound-wave amplitude spectrum according to the action frequency of the first sound source when emitting the first sound.
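The action-frequency rule above can be sketched as a monotone (here linear) mapping from the source's action rate to the spectrum's jitter rate, so a running character's footstep indicator jitters faster than a walking one's. The gain and the example walk/run step rates are assumptions:

```python
WALK_STEPS_PER_S = 1.5   # assumed walking step rate
RUN_STEPS_PER_S = 3.0    # assumed running step rate

def spectrum_jitter_hz(action_freq_hz: float, gain: float = 2.0) -> float:
    """Jitter frequency of the sound-wave amplitude spectrum, derived
    from the source's action frequency. Linear gain is an assumption."""
    return gain * action_freq_hz
```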
- The method according to any one of claims 1 to 14, wherein the visual representation of the first sound indicator comprises at least one of: shape, pattern, color, texture, text, animation effect, start display time, display duration, or blanking time.
- The method according to any one of claims 1 to 14, wherein the azimuth scale sequence in the compass information corresponds to a visible azimuth range of the first virtual character, and the method further comprises: if a second sound source exists in the surrounding environment of the first virtual character and a horizontal azimuth of the second sound source falls outside the visible azimuth range, displaying a second sound indicator based on an edge azimuth scale in the azimuth scale sequence that is nearest to the horizontal azimuth of the second sound source, the second sound indicator indicating that the second sound source exists in the horizontal azimuth indicated by the edge azimuth scale.
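The edge-scale rule above amounts to clamping the source's signed azimuth offset to the visible range of the compass. A sketch in degrees, with an assumed ±60° visible range; it returns the azimuth to draw the indicator at and whether it was snapped to an edge scale:

```python
def indicator_azimuth(source_az: float, facing_az: float,
                      half_fov: float = 60.0) -> tuple:
    """Azimuth at which to draw a sound indicator on the compass.

    If the source lies outside the visible azimuth range, the indicator
    is snapped to the nearest edge scale. half_fov is an assumption."""
    # Signed offset from the facing direction, normalized to (-180, 180].
    offset = (source_az - facing_az + 180.0) % 360.0 - 180.0
    clamped = max(-half_fov, min(half_fov, offset))
    return (facing_az + clamped) % 360.0, clamped != offset
```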
- The method according to any one of claims 1 to 14, further comprising: canceling display of the first sound indicator when the first virtual character enters a deaf state.
- The method according to any one of claims 1 to 14, further comprising: when the first sound source emits at least two sounds and a generation time difference between the at least two sounds is less than a threshold, determining the loudest of the at least two sounds as the first sound.
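The de-duplication rule above keeps only the loudest of near-simultaneous sounds from one source, so overlapping footsteps and rustling grass do not each spawn an indicator. A sketch, with the time threshold as an assumed parameter and sounds given as `(generation_time, volume)` pairs:

```python
def pick_first_sound(sounds: list, threshold: float = 0.1):
    """Among sounds emitted within `threshold` seconds of the earliest
    one, return the loudest (the one that drives the indicator).
    Each sound is a (generation_time, volume) tuple."""
    if not sounds:
        return None
    earliest = min(t for t, _ in sounds)
    near = [(t, v) for t, v in sounds if t - earliest < threshold]
    return max(near, key=lambda s: s[1])
```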
- A sound prompting apparatus in a virtual world, the apparatus being deployed on a terminal and comprising: a display module, configured to display a viewing-angle picture of a first virtual character, the viewing-angle picture displaying compass information, the compass information comprising an azimuth scale sequence, an azimuth scale in the azimuth scale sequence being used to indicate a horizontal azimuth faced by the first virtual character in the virtual world; and a control module, configured to control the first virtual character to move in the virtual world; the display module being further configured to, while the first virtual character is moving in the virtual world, if a first sound source exists in the surrounding environment of the first virtual character, display a first sound indicator based on a first azimuth scale in the azimuth scale sequence, the first sound indicator being used to indicate a horizontal azimuth and a vertical azimuth corresponding to the first sound source.
- A computer device, comprising a processor and a memory, the memory storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
- A computer-readable storage medium, storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
- A computer program product which, when executed, causes a processor to implement the sound prompting method in a virtual world according to any one of claims 1 to 18.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/322,031 US20230285859A1 (en) | 2021-08-05 | 2023-05-23 | Virtual world sound-prompting method, apparatus, device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110898406.6 | 2021-08-05 | ||
CN202110898406.6A CN115703011A (zh) | Sound prompting method, apparatus, device, and storage medium in a virtual world |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/322,031 Continuation US20230285859A1 (en) | 2021-08-05 | 2023-05-23 | Virtual world sound-prompting method, apparatus, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023011063A1 (zh) | 2023-02-09 |
Family
ID=85155163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/102593 WO2023011063A1 (zh) | 2022-06-30 | Sound prompting method, apparatus, device, and storage medium in a virtual world |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230285859A1 (zh) |
CN (1) | CN115703011A (zh) |
WO (1) | WO2023011063A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107890673A (zh) * | 2017-09-30 | 2018-04-10 | Netease (Hangzhou) Network Co., Ltd. | Visual display method and apparatus for compensating sound information, storage medium, and device |
CN108854069A (zh) * | 2018-05-29 | 2018-11-23 | Tencent Technology (Shenzhen) Co., Ltd. | Sound source determination method and apparatus, storage medium, and electronic apparatus |
US20190076739A1 (en) * | 2017-09-12 | 2019-03-14 | Netease (Hangzhou) Network Co., Ltd. | Information processing method, apparatus and computer readable storage medium |
TW201931354A (zh) * | 2018-01-05 | 2019-08-01 | Merry Electronics Co., Ltd. | Wearable electronic device for audio imaging and operating method thereof |
CN113559504A (zh) * | 2021-04-28 | 2021-10-29 | Netease (Hangzhou) Network Co., Ltd. | Information processing method, apparatus, storage medium, and electronic device |
- 2021
  - 2021-08-05 CN CN202110898406.6A patent/CN115703011A/zh active Pending
- 2022
  - 2022-06-30 WO PCT/CN2022/102593 patent/WO2023011063A1/zh active Application Filing
- 2023
  - 2023-05-23 US US18/322,031 patent/US20230285859A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN115703011A (zh) | 2023-02-17 |
US20230285859A1 (en) | 2023-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111481932B (zh) | Virtual object control method, apparatus, device, and storage medium | |
JP7476235B2 (ja) | Virtual object control method, apparatus, device, and computer program | |
CN110548288B (zh) | Hit prompting method, apparatus, terminal, and storage medium for virtual objects | |
CN110585710B (zh) | Interactive prop control method, apparatus, terminal, and storage medium | |
CN110478895B (zh) | Virtual item control method, apparatus, terminal, and storage medium | |
CN110585712A (zh) | Method, apparatus, terminal, and medium for throwing virtual explosives in a virtual environment | |
CN112076467B (zh) | Method, apparatus, terminal, and medium for controlling a virtual object to use virtual props | |
CN111744186B (zh) | Virtual object control method, apparatus, device, and storage medium | |
JP2022539288A (ja) | Virtual object control method, apparatus, device, and computer program | |
JP2022539289A (ja) | Virtual object aiming method, apparatus, and program | |
US20230013014A1 (en) | Method and apparatus for using virtual throwing prop, terminal, and storage medium | |
CN110585706B (zh) | Interactive prop control method, apparatus, terminal, and storage medium | |
CN110507990B (zh) | Virtual aircraft-based interaction method, apparatus, terminal, and storage medium | |
US20220379214A1 (en) | Method and apparatus for a control interface in a virtual environment | |
CN112076469A (zh) | Virtual object control method, apparatus, storage medium, and computer device | |
CN113559504B (zh) | Information processing method, apparatus, storage medium, and electronic device | |
US20230072503A1 (en) | Display method and apparatus for virtual vehicle, device, and storage medium | |
CN113041622A (zh) | Method, terminal, and storage medium for releasing virtual projectiles in a virtual environment | |
CN112044084A (zh) | Virtual prop control method, apparatus, storage medium, and device in a virtual environment | |
CN111359206A (zh) | Virtual object control method, apparatus, terminal, and storage medium | |
CN113713383A (zh) | Throwing prop control method, apparatus, computer device, and storage medium | |
CN114130031A (zh) | Virtual prop use method, apparatus, device, medium, and program product | |
CN117101139A (zh) | In-game information processing method, apparatus, storage medium, and electronic apparatus | |
CN112221135A (zh) | Picture display method, apparatus, device, and storage medium | |
WO2023011063A1 (zh) | Sound prompting method, apparatus, device, and storage medium in a virtual world | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22851778; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2301006963; Country of ref document: TH |
WWE | Wipo information: entry into national phase | Ref document number: 11202307122R; Country of ref document: SG |
ENP | Entry into the national phase | Ref document number: 2023571572; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |