WO2019205881A1 - Information display method, apparatus, device and storage medium in virtual environment - Google Patents

Information display method, apparatus, device and storage medium in virtual environment

Info

Publication number
WO2019205881A1
WO2019205881A1 · PCT/CN2019/080125 · CN2019080125W
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
sound effect
virtual
distance
coordinate
Prior art date
Application number
PCT/CN2019/080125
Other languages
English (en)
French (fr)
Inventor
胡申洋
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2019205881A1 publication Critical patent/WO2019205881A1/zh
Priority to US16/904,884 priority Critical patent/US11458395B2/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/847 Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying information in a virtual environment.
  • the virtual environment provided by the application can be used for activities by one or more virtual characters, including moving, jumping, attacking, releasing skills, and the like.
  • the sound effect playing method in the virtual environment includes: the server acquires the behavior state of each virtual object in the same virtual environment; determines the sound effect of each virtual object according to the behavior state of each virtual object; and sends a play instruction to the terminal, where the play instruction is used to instruct the terminal to play the sound effect corresponding to each virtual object. For example, if the virtual character A is in an attack state, the terminal plays the attack sound effect; if the virtual character B is in the skill release state, the terminal plays the skill release sound effect.
  • the embodiment of the present application provides a method, an apparatus, a device, and a storage medium for displaying information in a virtual environment.
  • the embodiment of the present application provides a method for displaying information in a virtual environment, where the method includes:
  • the embodiment of the present application provides a method for displaying information in a virtual environment, where the method includes:
  • the sound effect display instruction is an instruction sent by the server after obtaining the sound effect intensity, and the sound effect intensity is obtained by the server according to a first distance between the first virtual object and the second virtual object.
  • the first distance is calculated by the server according to the first coordinate and the second coordinate, after the server acquires, according to the first coordinate, the second coordinate of the second virtual object located in the first orientation.
  • the embodiment of the present application provides an information display apparatus in a virtual environment, where the apparatus includes:
  • An acquiring module configured to acquire a first coordinate of the first virtual object in the virtual environment, and acquire, according to the first coordinate, a second coordinate of a second virtual object that is in a preset behavior state in a first orientation of the first virtual object;
  • a processing module configured to calculate a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, and acquire, according to the first distance, the sound effect intensity of the second virtual object in the first orientation;
  • a sending module configured to send a sound effect display instruction to the first terminal corresponding to the first virtual object, where the sound effect display instruction is used to instruct the first terminal to display, in the virtual environment, a sound effect indication pattern centered on the first virtual object, the sound effect indication pattern being used to indicate that the second virtual object exists, in the first orientation, in a location area whose distance is negatively correlated with the sound effect intensity.
  • the embodiment of the present application provides an information display apparatus in a virtual environment, where the apparatus includes:
  • a sending module configured to send, to the server, a first coordinate of the first virtual object in the virtual environment
  • a receiving module configured to receive a sound effect display instruction sent by the server
  • a display module configured to display, according to the sound effect display instruction, a sound effect indication pattern centered on the first virtual object in the virtual environment, the sound effect indication pattern being used to indicate that a second virtual object in a preset behavior state exists, along a first orientation of the first virtual object, in a location area whose distance is negatively correlated with the sound effect intensity;
  • the sound effect display instruction is an instruction sent by the server after obtaining the sound effect intensity, and the sound effect intensity is obtained by the server according to a first distance between the first virtual object and the second virtual object.
  • the first distance is calculated by the server according to the first coordinate and the second coordinate, after the server acquires, according to the first coordinate, the second coordinate of the second virtual object located in the first orientation.
  • an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the information display method in the virtual environment described above.
  • embodiments of the present application provide a computer readable storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by a processor to implement the information display method in the virtual environment described above.
  • FIG. 1 is a schematic diagram of an implementation environment of an information display method in a virtual environment provided by an exemplary embodiment of the present application
  • FIG. 2 is a flowchart of an information display method in a virtual environment provided by an exemplary embodiment of the present application
  • FIG. 3 is a schematic diagram of a calibration direction of a virtual object according to an exemplary embodiment of the present application.
  • FIG. 4 is a schematic diagram of a sound effect indication pattern provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of a sound effect indication pattern provided by an exemplary embodiment of the present application.
  • FIG. 6 is a schematic diagram of a sound effect indication pattern provided by an exemplary embodiment of the present application.
  • FIG. 7 is a schematic diagram of a sound effect indication pattern provided by an exemplary embodiment of the present application.
  • FIG. 8 is a schematic diagram of a sound effect indication pattern provided by an exemplary embodiment of the present application.
  • FIG. 9 is a flowchart of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 10 is a schematic diagram of a preset distance range provided by an exemplary embodiment of the present application.
  • FIG. 11 is a schematic diagram of a preset distance range provided by an exemplary embodiment of the present application.
  • FIG. 12 is a schematic diagram of a preset distance range provided by an exemplary embodiment of the present application.
  • FIG. 13 is a flowchart of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 14 is a schematic diagram of a display interface of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 15 is a flowchart of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 16 is a structural block diagram of an information display apparatus in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 17 is a structural block diagram of an information display apparatus in a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 18 is a structural block diagram of an electronic computer device according to an exemplary embodiment of the present application.
  • FIG. 19 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Virtual environment A virtual environment provided when an application runs on a terminal.
  • the virtual environment can be a simulation of the real world, a semi-simulated, semi-fictional environment, or a purely fictional environment.
  • the virtual environment can be a two-dimensional virtual environment or a three-dimensional virtual environment.
  • Virtual object refers to a movable object in a virtual environment.
  • the movable object may be a virtual character, a virtual creature, an anime character, or the like.
  • the virtual object is a three-dimensional model created based on skeletal animation technology.
  • Each virtual object has its own shape and volume in the virtual environment, occupying a part of the space in the virtual environment.
  • Top view angle refers to the perspective of the virtual environment from a bird's eye view.
  • the virtual environment is viewed at an angle of 45 degrees.
  • the observation camera is located above the virtual environment, and the virtual environment is viewed from a top view angle.
  • Coordinate is the coordinate value of the reference point of each virtual object in the virtual environment, and the reference point may be a preset pixel point on the head, shoulder, foot, or chest of the virtual object.
  • the coordinates of the virtual object are (X, Y), where X represents the abscissa of the virtual object in the virtual environment, and Y represents the ordinate of the virtual object in the virtual environment.
  • in a three-dimensional virtual environment, the coordinates of the virtual object are (X, Y, Z), where X usually represents the coordinate in the east-west direction along the virtual environment ground plane, Y usually represents the coordinate in the north-south direction along the virtual environment ground plane, and Z usually represents the coordinate in the vertical direction perpendicular to the ground plane.
  • Sound effect indication pattern refers to a pattern that visually represents sound effects in the vicinity of a virtual object (for example, on the surrounding ground), centered on the virtual object.
  • the sound effect indication pattern can be an abstracted waveform image.
  • in the related art, the terminal simultaneously plays the sound effects of all virtual objects in the same virtual environment, and the user cannot judge the distance to each virtual object from its sound effect, which makes the virtual environment less realistic.
  • the technical solution provided by the embodiments of the present application acquires the coordinates of a second virtual object located in a first orientation of a first virtual object, calculates a first distance between the first virtual object and the second virtual object, determines a sound effect intensity according to the first distance, and displays a sound effect indication pattern centered on the first virtual object according to the sound effect intensity. The pattern indicates that the second virtual object exists in a location area whose distance is negatively correlated with the sound effect intensity, so that the distance between the second virtual object and the first virtual object can be displayed visually, which improves the realism of the virtual environment.
  • FIG. 1 is a schematic diagram of an implementation environment of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • the implementation environment includes a first terminal 110, a second terminal 120, and a server 130.
  • the first terminal 110 establishes a communication connection with the server 130 through a wired or wireless network, and the second terminal 120 establishes a communication connection with the server 130 through a wired or wireless network.
  • the server 130 is communicatively coupled to at least two terminals
  • the first terminal 110 and the second terminal 120 are two of the at least two terminals shown in the figure.
  • the user manipulates the first virtual object in the virtual environment through the first terminal 110, and the first terminal 110 sends the first behavior state and the first coordinate of the first virtual object to the server.
  • the user manipulates the second virtual object in the virtual environment through the second terminal 120, and the second terminal 120 sends the second behavior state and the second coordinate of the second virtual object to the server.
  • the server 130 detects, according to the first coordinate, whether the second behavior state of the second virtual object located in the first orientation of the first virtual object is a preset behavior state; if the second behavior state is the preset behavior state, the server calculates a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, acquires the sound effect intensity of the second virtual object in the first orientation according to the first distance, and sends a sound effect display instruction to the first terminal 110.
  • the first terminal 110 receives the sound effect display instruction sent by the server 130 and, according to the instruction, displays a sound effect indication pattern centered on the first virtual object in the virtual environment, where the sound effect indication pattern is used to indicate that a second virtual object exists, along the first orientation, in a location area whose distance is negatively correlated with the sound effect intensity.
  • the virtual environment may be a multiplayer online game in a top view
  • the first virtual object is a first hero controlled by a first player in a multiplayer online game
  • the second virtual object is a second hero character controlled by a second player in the multiplayer online game. When the second hero character approaches the first hero character, a sound effect indication pattern is displayed centered on the first hero character, the sound effect indication pattern points to the position of the second hero character, and at least one of the size, area, outline width, and number of lines of the pattern is negatively correlated with the distance between the first hero character and the second hero character.
  • FIG. 2 illustrates a method flowchart of an information display method in a virtual environment provided by an exemplary embodiment of the present application.
  • the method is applicable to the server 130 in the implementation environment as shown in FIG. 1, the method comprising:
  • Step 201 Acquire a first coordinate of the first virtual object in the virtual environment.
  • the first terminal corresponding to the first virtual object sends a first behavior state of the first virtual object to the server, and a first coordinate of the first virtual object in the virtual environment, and the server receives the first behavior state and the first coordinate.
  • the first terminal sends the first behavior state and the first coordinate to the server at a preset time interval; alternatively, the first terminal sends the first behavior state and the first coordinate to the server when it determines that the first behavior state has changed.
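The two update policies described above (sending every preset time interval, or sending only when the behavior state changes) can be sketched as follows. This is an illustrative Python sketch; the class name, policy labels, and interval value are assumptions, not part of the disclosure.

```python
class UpdateSender:
    """Models when the first terminal reports its state and coordinate to the server."""

    def __init__(self, policy="interval", interval=0.5):
        self.policy = policy          # "interval" or "on_change" (assumed labels)
        self.interval = interval      # preset time interval, in seconds
        self.last_sent_time = None
        self.last_state = None
        self.outbox = []              # messages "sent" to the server

    def update(self, now, state, coord):
        if self.policy == "interval":
            # send every preset time interval
            if self.last_sent_time is None or now - self.last_sent_time >= self.interval:
                self.outbox.append((state, coord))
                self.last_sent_time = now
        else:
            # send only when the behavior state changes
            if state != self.last_state:
                self.outbox.append((state, coord))
                self.last_state = state
```

For example, with the interval policy and a 0.5-second interval, samples at t = 0, 0.3, and 0.6 result in two messages (at t = 0 and t = 0.6).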
  • Step 202 Acquire, according to the first coordinate, a second coordinate of the second virtual object that is in a preset behavior state in a first orientation of the first virtual object.
  • the server detects, according to the first coordinate, whether there is a second virtual object in a preset behavior state in the first orientation of the first virtual object; if a second virtual object in the preset behavior state exists in the first orientation, the server acquires the second coordinate of the second virtual object.
  • the behavior state of the second virtual object is sent by the second terminal corresponding to the second virtual object to the server.
  • the server takes the first coordinate 300 as the center of a circle, determines a circular detection area with a preset distance as the radius, and divides the detection area into equal regions by straight lines passing through the first coordinate.
  • each of the regions 310, 320, 330, 340, 350, 360, 370, and 380 corresponds to an orientation, and the first orientation is any one of the plurality of regions.
  • for example, if the first orientation is the region 310, the server detects whether there is a second virtual object in the preset behavior state in the region 310; if a second virtual object in the preset behavior state exists in the region 310, the server performs the step of acquiring the second coordinate of the second virtual object.
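Determining which of the equal angular regions around the first coordinate contains another object can be sketched as follows. The sector numbering and the choice of the positive X axis as the starting boundary are illustrative assumptions; the patent does not specify them.

```python
import math

def orientation_sector(first_coord, other_coord, num_sectors=8):
    """Return the index of the equal angular sector (cf. regions 310-380 in
    FIG. 3) around first_coord that contains other_coord."""
    dx = other_coord[0] - first_coord[0]
    dy = other_coord[1] - first_coord[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)  # angle in [0, 2*pi)
    width = 2 * math.pi / num_sectors           # angular width of one sector
    return int(angle // width)
```

With eight sectors, a point due east of the first coordinate falls in sector 0, due north in sector 2, and due west in sector 4 (under the assumed numbering).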
  • the server determines whether the first virtual object and the second virtual object belong to the same camp; if the first virtual object and the second virtual object do not belong to the same camp, the step of acquiring the second coordinate is performed; if the first virtual object and the second virtual object belong to the same camp, the procedure ends.
  • the camp refers to the team to which the hero controlled by the player belongs in the multiplayer online competitive game.
  • for example, if the first hero character belongs to the red team, the second hero character belongs to the blue team, and the red team and the blue team are hostile, then the first hero character and the second hero character do not belong to the same camp.
  • if the first hero character belongs to the red team, the second hero character belongs to the yellow team, and the red team and the yellow team are allied, then the first hero character and the second hero character belong to the same camp; likewise, if the first hero character and the second hero character both belong to the red team, they belong to the same camp.
  • the preset behavior state is a behavior state of a virtual object preset in the terminal; for example, the preset behavior state includes at least one of moving, attacking, and releasing a skill by the virtual object.
  • Step 203 Calculate a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate.
  • the server calculates the first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate.
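The first distance in step 203 is naturally the Euclidean distance between the two coordinates; a minimal sketch (the function name is an assumption):

```python
import math

def first_distance(first_coord, second_coord):
    """Euclidean distance between two reference points; works for both
    (X, Y) and (X, Y, Z) coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first_coord, second_coord)))
```

For example, first_distance((0, 0), (3, 4)) is 5.0.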
  • Step 204 Acquire a sound effect intensity of the second virtual object in the first orientation according to the first distance.
  • the server acquires the sound intensity of the second virtual object in the first direction according to the first distance.
  • the sound intensity is used to indicate the intensity of the sound effect applied by the preset behavior state of the second virtual object to the first virtual object, and the sound intensity is inversely proportional to the first distance.
  • the preset behavior state is moving
  • the second virtual object generates a footstep sound during the movement, and the smaller the first distance between the second virtual object and the first virtual object, the greater the sound intensity generated by the footstep sound.
  • the greater the first distance between the second virtual object and the first virtual object the smaller the sound intensity produced by the footstep sound.
  • the server stores a first correspondence between the distance and the sound intensity, and the server may query, in the first correspondence, the sound intensity corresponding to the first distance according to the first distance.
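The first correspondence can be sketched as a lookup table over distance ranges. The distance thresholds and intensity labels below are illustrative assumptions; the patent does not give concrete values.

```python
# hypothetical first correspondence: (maximum distance, sound effect intensity)
FIRST_CORRESPONDENCE = [
    (3.0, "first intensity"),   # closest range -> strongest intensity
    (6.0, "second intensity"),
    (9.0, "third intensity"),
]

def sound_intensity(distance):
    """Query the first correspondence: intensity decreases as the first
    distance grows; beyond the largest range, no intensity is returned."""
    for max_distance, intensity in FIRST_CORRESPONDENCE:
        if distance <= max_distance:
            return intensity
    return None
```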
  • Step 205 Send a sound effect display instruction to the first terminal corresponding to the first virtual object.
  • after obtaining the sound effect intensity, the server sends a sound effect display instruction to the first terminal, and the sound effect display instruction carries the sound effect intensity.
  • after receiving the sound effect display instruction, the first terminal displays the sound effect indication pattern centered on the first virtual object; the sound effect indication pattern is used to indicate that the second virtual object exists, in the first orientation, in a location area whose distance is negatively correlated with the sound effect intensity.
  • the sound effect display instruction includes a sound intensity
  • the first terminal determines a pattern parameter of the sound effect indication pattern according to the sound intensity
  • the pattern parameter includes at least one of a size, an area, a contour width, and a number of lines.
  • the first terminal pre-stores a second correspondence between the sound effect intensity and the pattern parameter, and queries the second correspondence according to the sound effect intensity to determine the pattern parameter of the sound effect indication pattern.
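The second correspondence can likewise be sketched as a table from sound effect intensity to pattern parameters. The outline-width values follow the examples in the description (wide, medium, and narrow outlines for the first, second, and third intensity); the dictionary form itself is an illustrative assumption.

```python
# second correspondence: sound effect intensity -> pattern parameter
SECOND_CORRESPONDENCE = {
    "first intensity": {"outline_width": "wide"},
    "second intensity": {"outline_width": "medium"},
    "third intensity": {"outline_width": "narrow"},
}

def pattern_parameters(intensity):
    """Query the second correspondence for the pattern parameter of the
    sound effect indication pattern."""
    return SECOND_CORRESPONDENCE[intensity]
```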
  • the first terminal is displayed in the virtual environment 400 with the sound effect indication pattern 420 centered on the first virtual object 410, and the pattern parameter of the sound effect indication pattern is a wide outline.
  • the pattern parameter is a pattern parameter corresponding to the first intensity.
  • the first terminal is displayed in the virtual environment 400 with the sound effect indication pattern 420 centered on the first virtual object 410, and the pattern parameter of the sound effect indication pattern is a medium outline.
  • the pattern parameter is a pattern parameter corresponding to the second intensity.
  • the first terminal is displayed in the virtual environment 400 with the sound effect indication pattern 420 centered on the first virtual object 410 , and the pattern parameter of the sound effect indication pattern is a narrow outline.
  • the pattern parameter is a pattern parameter corresponding to the third intensity.
  • the sound effect display instruction further carries the preset behavior state of the second virtual object, so that the first terminal determines the pattern type of the sound effect indication pattern according to the preset behavior state.
  • the first terminal pre-stores a third correspondence between the preset behavior state and the pattern type, and queries the third correspondence according to the preset behavior state to determine the pattern type of the sound effect indication pattern.
  • after receiving the sound effect display instruction, the first terminal determines the pattern type of the sound effect indication pattern according to the preset behavior state, and/or determines the pattern parameter of the sound effect indication pattern according to the sound effect intensity, where the pattern parameter includes at least one of size, area, outline width, and number of lines.
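The third correspondence (preset behavior state to pattern type) can be sketched the same way. The fan-shaped pattern for releasing a skill follows the description below; the wave-pattern-for-moving pairing and the dictionary form are illustrative assumptions.

```python
# hypothetical third correspondence: preset behavior state -> pattern type
THIRD_CORRESPONDENCE = {
    "moving": "wave pattern",            # assumed pairing
    "releasing a skill": "fan-shaped pattern",
}

def pattern_type(preset_behavior_state):
    """Query the third correspondence for the pattern type of the sound
    effect indication pattern."""
    return THIRD_CORRESPONDENCE[preset_behavior_state]
```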
  • for example, if the corresponding pattern type is a wave pattern and the pattern parameter corresponding to the first intensity is a wide outline width, the sound effect indication pattern 510 displayed by the first terminal in the virtual environment 500, centered on the first virtual object 410, is a wave pattern with a wide outline width.
  • if the pattern type corresponding to releasing a skill is a fan-shaped pattern and the pattern parameter corresponding to the second intensity is a narrow outline width, the sound effect indication pattern 510 displayed by the first terminal in the virtual environment 500, centered on the first virtual object 410, is a fan-shaped pattern with a narrow outline width.
  • in summary, the coordinates of the second virtual object located in the first orientation of the first virtual object are acquired, the first distance between the first virtual object and the second virtual object is calculated, the sound effect intensity is determined according to the first distance, and the sound effect indication pattern is displayed centered on the first virtual object according to the sound effect intensity. The sound effect indication pattern is used to indicate that the second virtual object exists in a location area whose distance is negatively correlated with the sound effect intensity, so that the distance between the second virtual object and the first virtual object can be displayed visually, which improves the realism of the virtual environment.
  • in addition, since the first terminal displays the sound effect indication pattern centered on the first virtual object only for second virtual objects that do not belong to the same camp, interference to the user from the sound effect indication patterns of same-camp second virtual objects is avoided, which improves the efficiency of information display in the virtual environment.
  • FIG. 9 is a flowchart of a method for displaying information in a virtual environment provided by an exemplary embodiment of the present application.
  • the method can be applied to an implementation environment as shown in FIG. 1, the method comprising:
  • Step 601 The first terminal sends the first coordinate of the first virtual object in the virtual environment to the server at a preset time interval.
  • the first terminal sends the first coordinate of the first virtual object in the virtual environment to the server at a preset time interval through a wired or wireless network.
  • the first terminal detects whether the first virtual object is in a preset behavior state; when the first virtual object is in the preset behavior state, the first terminal sends, to the server, the preset behavior state of the first virtual object and the first coordinate; when the first virtual object is not in the preset behavior state, the first terminal sends only the first coordinate to the server.
  • Step 602 The second terminal sends a preset state of the second virtual object to the server at a preset time interval, and a second coordinate of the second virtual object in the virtual environment.
  • the second terminal detects whether the second virtual object is in a preset behavior state, and when the second virtual object is in the preset behavior state, sends the preset of the second virtual object to the server at a preset time interval through the wired or wireless network. a behavior state, and a second coordinate of the second virtual object in the virtual environment; when the second virtual object is not in the preset behavior state, the second terminal sends the second coordinate to the server.
• Step 603: The server detects, according to the first coordinate, whether a candidate virtual object exists in a detection area with the first coordinate as the center and the preset distance as the radius, the candidate virtual object being a virtual object that is in a preset behavior state and does not belong to the same camp as the first virtual object.
  • the server detects, according to the first coordinate, whether the candidate virtual object exists in the detection area with the first coordinate as the center and the preset distance as the radius.
  • the candidate virtual object is a virtual object that is in a preset behavior state and does not belong to the same camp as the first virtual object.
• the preset distance can be set as required. As shown in FIG. 11, with the first coordinate 700 as the center, the detection area 710 with the preset distance r2 as the radius is partially located in the display area 720 of the virtual environment, and the detection area 710 does not cover the entire display area 720; the detection area 710 with the preset distance r3 as the radius covers the entire display area 720.
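• The detection-area filtering of step 603 could be sketched as follows (a minimal illustration; the dictionary fields and camp labels are assumptions, and the circular area test uses the Euclidean distance):

```python
import math

def find_candidates(first, objects, preset_distance):
    """Collect virtual objects inside the circular detection area centered on
    the first coordinate that are in a preset behavior state and do not
    belong to the first virtual object's camp."""
    found = []
    for obj in objects:
        # Skip objects not in a preset behavior state or in the same camp.
        if not obj["in_preset_state"] or obj["camp"] == first["camp"]:
            continue
        # Keep the object if it lies within the preset-distance radius.
        if math.hypot(obj["x"] - first["x"], obj["y"] - first["y"]) <= preset_distance:
            found.append(obj)
    return found
```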
  • Step 604 When there are at least two candidate virtual objects in the first orientation in the detection area, the server acquires coordinates of the at least two candidate virtual objects.
  • the server detects that there are at least two candidate virtual objects in the first orientation of the first coordinate, and acquires coordinates of the at least two candidate virtual objects.
  • the coordinates of the candidate virtual object are sent to the server by the candidate virtual object corresponding terminal through a wired or wireless network.
  • Step 605 The server calculates a distance between each candidate virtual object and the first virtual object according to the coordinates of the candidate virtual object and the first coordinate.
  • the server calculates a distance between the at least two candidate virtual objects and the first virtual object according to coordinates of the at least two candidate virtual objects and first coordinates of the first virtual object.
• for the method by which the server calculates the distance between the candidate virtual object and the first virtual object, reference may be made to step 203 in the embodiment of FIG. 2; details are not described herein again.
  • Step 606 The server uses the candidate object with the smallest distance as the second virtual object, and the distance between the second virtual object and the first virtual object as the first distance.
• according to the calculated distances, the server takes the candidate virtual object with the smallest distance as the second virtual object, and takes the distance between the second virtual object and the first virtual object as the first distance.
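• Steps 605 and 606 can be sketched in a few lines (an illustrative sketch; the coordinate-dictionary shape is an assumption):

```python
import math

def select_second_object(first, candidates):
    """Compute the distance from each candidate to the first virtual object,
    pick the closest candidate as the second virtual object, and return it
    together with the first distance."""
    def dist(c):
        return math.hypot(c["x"] - first["x"], c["y"] - first["y"])
    second = min(candidates, key=dist)  # candidate with the smallest distance
    return second, dist(second)
```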
  • Step 607 The server determines a target distance range in which the first distance is within n preset distance ranges.
  • the server determines a target distance range in which the first distance is within n preset distance ranges.
  • the n preset distance ranges are distance ranges that do not overlap each other and are adjacent to each other, and n is a natural number, n ⁇ 2.
• for example, the first distance is 7100 distance units. The server stores three preset distance ranges that are connected end to end: a first distance range (0 to 7000 distance units), a second distance range (7000 to 13999 distance units), and a third distance range (13999 to 21000 distance units). The server determines that the first distance falls within the second distance range, so the second distance range is the target distance range.
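• The target-range lookup of step 607 can be sketched as follows (the half-open interval convention and the unit values are assumptions based on the example above):

```python
# End-to-end preset distance ranges from the example above, in distance units.
RANGES = [(0, 7000), (7000, 13999), (13999, 21000)]

def target_range(first_distance, ranges=RANGES):
    """Return the index of the preset distance range containing
    first_distance, or None if it lies outside every range."""
    for i, (lo, hi) in enumerate(ranges):
        if lo <= first_distance < hi:
            return i
    return None
```

With the example above, a first distance of 7100 distance units falls in the second range (index 1).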
  • Step 608 The server determines, according to the target distance range, a sound effect intensity of the second virtual object in the first orientation.
  • the server determines the sound effect intensity of the second virtual object in the first orientation according to the target distance range of the first distance within the n preset distance ranges.
  • the preset distance range includes two distance ranges, the first preset distance range is 0 to the first distance threshold, and the second preset distance range is the first distance threshold to the preset distance.
• the server first determines, according to the first distance, whether the first distance is smaller than the first distance threshold. If the first distance is smaller than the first distance threshold, it falls within the first preset distance range, and the sound effect intensity of the second virtual object in the first orientation is determined to be a first intensity; if the first distance is not smaller than the first distance threshold, it falls within the second preset distance range, and the sound effect intensity of the second virtual object in the first orientation is determined to be a second intensity, wherein the first intensity is greater than the second intensity.
  • the preset distance range includes three distance ranges, the first preset distance range is 0 to the first distance threshold, and the second preset distance range is the first distance threshold to the second distance threshold, and the third preset distance The range is the second distance threshold to a preset distance.
• the server first determines, according to the first distance, whether the first distance is smaller than the first distance threshold. If the first distance is smaller than the first distance threshold, it falls within the first preset distance range, and the sound effect intensity of the second virtual object in the first orientation is determined to be the first intensity.
• if the first distance is not smaller than the first distance threshold, the server determines whether the first distance is smaller than the second distance threshold. If the first distance is smaller than the second distance threshold, it falls within the second preset distance range, and the sound effect intensity of the second virtual object in the first orientation is determined to be the second intensity; if the first distance is not smaller than the second distance threshold, it falls within the third preset distance range, and the sound effect intensity of the second virtual object in the first orientation is determined to be a third intensity, wherein the first intensity is greater than the second intensity, and the second intensity is greater than the third intensity.
  • the server stores three preset distance ranges that are connected end to end, respectively, a first distance range (0 distance units to 7000 distance units), and a second distance range (7000 distance units to 13999 distance units). And the third distance range (13999 distance units to 21000 distance units).
• after the server calculates the first distance, it first determines whether the first distance is within the first distance range; if so, it determines that the second virtual object has a strong sound effect intensity in the first orientation. If the first distance is not within the first distance range, the server determines whether the first distance is within the second distance range; if so, it determines that the second virtual object has a medium sound effect intensity in the first orientation. If the first distance is not within the second distance range, the server determines whether the first distance is within the third distance range; if so, it determines that the second virtual object has a weak sound effect intensity in the first orientation. If the first distance exceeds the third distance range, all the coordinates involved in the operation are deleted.
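• The cascading threshold checks above can be sketched as a single function (the threshold values follow the example ranges; the "medium" label for the middle range is an assumption, since the original text names only the strong and weak intensities explicitly):

```python
def sound_intensity(first_distance, d1=7000, d2=13999, d3=21000):
    """Map the first distance to a sound effect intensity by cascading
    threshold checks; None means the distance exceeds the third range and
    the coordinates involved in the operation should be deleted."""
    if first_distance < d1:
        return "strong"
    if first_distance < d2:
        return "medium"   # assumed label for the middle range
    if first_distance < d3:
        return "weak"
    return None
```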
  • Step 609 The server sends a sound effect display instruction to the first terminal.
  • the server sends a sound effect display instruction to the first terminal, the sound effect display instruction carrying the first orientation and the sound intensity.
  • the sound effect display instruction also carries a preset behavior state.
  • Step 610 The first terminal displays the sound effect indication pattern centering on the first virtual object according to the sound effect display instruction.
• after receiving the sound effect display instruction, the first terminal determines a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter including at least one of a size, an area, a contour width, and a number of lines; and selects, according to the first orientation, a sound effect indication pattern corresponding to the first orientation from the m sound effect indication patterns of different orientations.
  • a second correspondence between the sound intensity and the pattern parameter and a fourth correspondence between the orientation and the pattern type are prestored in the first terminal.
• the first terminal determines the pattern parameter of the sound effect indication pattern according to the sound effect intensity, determines the pattern type of the sound effect indication pattern in the first orientation according to the first orientation, and, after determining the pattern parameter and the pattern type, displays the sound effect indication pattern centering on the first virtual object, the pattern indicating that a second virtual object exists in a location area in the first orientation whose distance is negatively correlated with the sound effect intensity.
• for example, the first orientation is the area 310 shown in FIG. 3. The terminal determines, according to the first orientation, that the sound effect indication pattern is the sound effect indication pattern displayed in the area 310; the sound effect intensity is the first intensity, and the terminal determines, according to the first intensity, that the pattern parameter of the sound effect indication pattern is a wide contour width. Centering on the first virtual object, a sound effect indication pattern having a wide contour width is displayed in the area 310.
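• A minimal sketch of step 610's two lookups (the table contents, orientation names, and width values below are hypothetical; only the structure, intensity to pattern parameter and orientation to pattern type, follows the text):

```python
# Hypothetical second correspondence: sound effect intensity -> contour width.
INTENSITY_TO_WIDTH = {"first": 6, "second": 3, "third": 1}
# Hypothetical fourth correspondence: orientation -> pattern type (m = 4).
ORIENTATION_TO_PATTERN = {"north": "arc_n", "east": "arc_e",
                          "south": "arc_s", "west": "arc_w"}

def build_indication(orientation, intensity):
    """Select the pattern type for the first orientation and the pattern
    parameter (here, the contour width) for the sound effect intensity."""
    return {"pattern_type": ORIENTATION_TO_PATTERN[orientation],
            "contour_width": INTENSITY_TO_WIDTH[intensity]}
```

Because the pattern is chosen from a small fixed set of orientations, the displayed direction is deliberately coarse, which matches the simulation-balance point made below.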
  • the first terminal determines the pattern type of the sound effect indication pattern according to the preset behavior state. For details, refer to the embodiments of FIG. 2 and FIG. 9 , and details are not described herein.
• optionally, the sound effect indication pattern is overlaid on the environment image.
• in this way, the sound effect indication pattern can be displayed to the user more prominently, avoiding interference from the environment image.
• the coordinates of the second virtual object located in the first orientation of the first virtual object are acquired, the first distance between the first virtual object and the second virtual object is calculated, the sound effect intensity is determined according to the first distance, and the sound effect indication pattern is displayed centering on the first virtual object according to the sound effect intensity. The sound effect indication pattern indicates that the second virtual object exists in a location area whose distance is negatively correlated with the sound effect intensity, so the distance between the second virtual object and the first virtual object can be displayed visually, which increases the realism of the virtual environment.
• because the first terminal displays the sound effect indication pattern centering on the first virtual object only for virtual objects that do not belong to the same camp, interference to the user from sound effect indication patterns of same-camp second virtual objects is avoided, and the information display efficiency in the virtual environment is improved.
• in addition, by determining the second virtual object among the candidate virtual objects, determining the sound effect intensity according to the first distance between the second virtual object and the first virtual object, and displaying the sound effect indication pattern centering on the first virtual object according to the sound effect intensity, the display clutter caused by displaying the sound effect indication patterns of multiple candidate virtual objects in the same orientation, which interferes with the user's judgment, is avoided, and the information display efficiency in the virtual environment is improved.
• in addition, the first terminal in the embodiment of the present application selects, according to the first orientation, the sound effect indication pattern corresponding to the first orientation from the sound effect indication patterns of a plurality of different orientations. Because the sound effect indication pattern is selected from a limited set of orientation patterns, the problem that the pointing of the sound effect indication pattern is too precise is solved, which improves the simulation balance of the virtual environment.
  • FIG. 13 is a flowchart of a method for displaying information in a virtual environment provided by an exemplary embodiment of the present application.
  • the method is applicable to the first terminal 110 in the implementation environment as shown in FIG. 1, the method comprising:
  • Step 801 Display a first display screen of the virtual environment viewed from a top view, and display an object model of the first virtual object in the first display screen.
  • the first display screen 900 displayed in the first terminal is a virtual environment screen viewed from a top view, and the object model of the first virtual object 910 is displayed in the first display screen 900 .
• Step 802: When a second virtual object in a preset behavior state exists in the first orientation of the first virtual object in the virtual environment, the sound effect indication pattern is displayed centering on the first virtual object, and the sound effect indication pattern points in the direction of the location of the second virtual object.
  • the sound effect indication pattern 930 is displayed centering on the location of the first virtual object 910.
  • the sound effect indication pattern 930 points in the direction of the location of the second virtual object 920.
  • the pattern parameter of the sound effect indication pattern is determined according to a first distance R between the first virtual object 910 and the second virtual object 920, and the pattern parameter includes at least one of a size, an area, a contour width, and a number of lines.
  • the first distance R between the first virtual object 910 and the second virtual object 920 is calculated to obtain the sound intensity.
• the sound effect indication pattern is displayed centering on the first virtual object, and the sound effect indication pattern points in the direction of the location of the second virtual object. Because the pattern parameter of the sound effect indication pattern is determined according to the first distance between the first virtual object and the second virtual object, the distance between the second virtual object and the first virtual object can be displayed visually, which improves the realism of the virtual environment.
• FIG. 15 is a flowchart of a method for displaying information in a virtual environment provided by an exemplary embodiment of the present application. The method can be applied to the implementation environment shown in FIG. 1.
  • the terminal stores two sets of coordinate data, (x, y) and (X, Y).
• (x, y) is the coordinate when the virtual object is in the preset behavior state, and (X, Y) is the coordinate when the virtual object is in any behavior state; that is, when the virtual object is in the preset behavior state, its corresponding (x, y) is the same as (X, Y).
• when the virtual object is not in the preset behavior state, the terminal only stores (X, Y) and does not store (x, y).
  • the preset behavior state includes at least one of a move, an attack, or a release skill.
• the terminal detects, at predetermined intervals, whether the virtual object is performing a preset behavior such as moving, attacking, or releasing a skill. If the preset behavior is performed, the coordinates (x, y) and (X, Y) of the virtual object are recorded, and the recorded coordinates (x, y) and (X, Y) are uploaded to the server.
• the server stores the received coordinates (x, y) and (X, Y) in a stack; according to the coordinates recorded in the stack, the server calculates the first distance between any two coordinates and determines the calibration direction. The server first determines whether the first distance is within the preset distance and whether the virtual objects corresponding to the two coordinates belong to the same camp. If the two coordinates are within the preset distance and the objects do not belong to the same camp, the server determines whether the first distance between the two coordinates is within the first distance range; if so, it sends an instruction to the terminal to play a strong special effect in the calibration direction. If the first distance is not within the first distance range, the server determines whether the first distance is within the second distance range; if so, it sends an instruction to the terminal to play a special effect of medium strength in the calibration direction. If the first distance is not within the second distance range, the server determines whether the first distance is within the third distance range; if so, it sends an instruction to the terminal to play a weak special effect in the calibration direction; if not, it destroys the data of the two coordinates.
  • the special effect is the sound effect indication pattern in the above embodiment.
  • the server destroys the coordinate data of the completed operation to alleviate the stack pressure.
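• The FIG. 15 server flow could be sketched as follows (the record structure, the "medium" strength label, and the pairing-by-popping scheme are assumptions for illustration, not the application's actual implementation):

```python
import math

def drain_stack(stack, preset_distance=21000):
    """Pop coordinate records in pairs, gate on the preset distance and on
    camp, map the distance to a special-effect strength, and discard
    (destroy) each record once its operation completes, relieving stack
    pressure."""
    instructions = []
    while len(stack) >= 2:
        a, b = stack.pop(), stack.pop()
        d = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
        if d <= preset_distance and a["camp"] != b["camp"]:
            if d < 7000:
                instructions.append("strong")
            elif d < 13999:
                instructions.append("medium")
            elif d < 21000:
                instructions.append("weak")
        # a and b have been removed from the stack: their data is destroyed
    return instructions
```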
• FIG. 16 is a structural block diagram of an information display apparatus in a virtual environment provided by an exemplary embodiment of the present application. As shown, the apparatus can be applied to the server 130 in the implementation environment shown in FIG. 1, the apparatus including an obtaining module 1110, a processing module 1120, and a sending module 1130:
• the obtaining module 1110 is configured to acquire a first coordinate of the first virtual object in the virtual environment, and acquire, according to the first coordinate, a second coordinate of a second virtual object that is in a preset behavior state in the first orientation of the first virtual object.
  • the processing module 1120 is configured to calculate a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, and acquire a sound intensity of the second virtual object in the first orientation according to the first distance.
• the sending module 1130 is configured to send a sound effect display instruction to the first terminal corresponding to the first virtual object, where the sound effect display instruction is used to instruct the first terminal to display the sound effect indication pattern centering on the first virtual object in the virtual environment, and the sound effect indication pattern is used to indicate that a second virtual object exists in a location area in the first orientation whose distance is negatively correlated with the sound effect intensity.
• optionally, the sound effect display instruction includes the sound effect intensity, where the sound effect intensity is used to instruct the first terminal to determine a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter including at least one of a size, an area, a contour width, and a number of lines.
• optionally, the sound effect display instruction includes the preset behavior state, where the preset behavior state is used to instruct the first terminal to determine a pattern type of the sound effect indication pattern according to the preset behavior state; and/or, the sound effect display instruction includes the first orientation, where the first orientation is used to instruct the first terminal to select a sound effect indication pattern corresponding to the first orientation from the m sound effect indication patterns of different orientations, where m is a natural number and m ≥ 1.
  • the obtaining module 1110 is further configured to send a sound effect display instruction to the first terminal when the first virtual object and the second virtual object do not belong to the same camp.
• the processing module 1120 is further configured to detect whether another virtual object in a preset behavior state exists in the first orientation, the other virtual object not belonging to the same camp as the first virtual object; and when no other virtual object exists in the first orientation, acquire the sound effect intensity of the second virtual object in the first orientation according to the first distance.
• the processing module 1120 is further configured to determine a target distance range in which the first distance falls among n preset distance ranges, where the n preset distance ranges are distance value ranges that do not overlap and are adjacent to each other, n is a natural number, and n ≥ 2; and determine the sound effect intensity of the second virtual object in the first orientation according to the target distance range.
• the processing module 1120 is further configured to detect whether a candidate virtual object in the preset behavior state exists in the detection area with the first coordinate as the center and the preset distance as the radius; when at least two candidate virtual objects exist in the first orientation in the detection area, calculate the distance between each candidate virtual object and the first virtual object; and determine the candidate virtual object closest to the first virtual object as the second virtual object.
  • the obtaining module 1110 is further configured to acquire the first coordinate of the first virtual object in the virtual environment every preset time interval.
  • FIG. 17 is a structural block diagram of an information display apparatus in a virtual environment provided by an exemplary embodiment of the present application. As shown, the apparatus is applicable to the first terminal 110 in the implementation environment as shown in FIG. 1, the apparatus comprising a transmitting module 1210, a receiving module 1220, and a display module 1230:
  • the sending module 1210 is configured to send, to the server, a first coordinate of the first virtual object in the virtual environment.
  • the receiving module 1220 is configured to receive a sound effect display instruction sent by the server;
• the display module 1230 is configured to display, according to the sound effect display instruction, a sound effect indication pattern centering on the first virtual object in the virtual environment, the sound effect indication pattern being used to indicate that a second virtual object in the preset behavior state exists in a location area in the first orientation of the first virtual object whose distance is negatively correlated with the sound effect intensity.
• the sound effect display instruction is an instruction sent by the server after obtaining the sound effect intensity; the sound effect intensity is obtained by the server according to the first distance between the first virtual object and the second virtual object, and the first distance is calculated by the server from the first coordinate and the second coordinate after acquiring, according to the first coordinate, the second coordinate of the second virtual object in the first orientation of the first virtual object.
  • the display module 1230 is further configured to determine a pattern parameter of the sound effect indication pattern according to the sound intensity, the pattern parameter including at least one of a size, an area, a contour width, and a number of lines.
  • the sound effect display instruction further includes a preset behavior state, and/or a first orientation
• the display module 1230 is further configured to determine a pattern type of the sound effect indication pattern according to the preset behavior state; and/or select a sound effect indication pattern corresponding to the first orientation from the m sound effect indication patterns of different orientations, where m is a natural number and m ≥ 1.
  • the sending module 1210 is further configured to send the first coordinate to the server every preset time interval.
  • FIG. 18 is a structural block diagram of a computer device provided by an exemplary embodiment of the present application.
• the computer device is used to implement the information display method in the virtual environment on the server side provided in the above embodiments, and may be the server 130 in the embodiment of FIG. 1. Specifically:
• Computer device 1300 includes a central processing unit (CPU) 1301, a system memory 1304 including a random access memory (RAM) 1302 and a read only memory (ROM) 1303, and a system bus 1305 that connects the system memory 1304 and the central processing unit 1301.
• Computer device 1300 also includes a basic input/output system (I/O system) 1306 that facilitates the transfer of information between devices within the computer, and a mass storage device 1307 for storing an operating system 1313, applications 1314, and other program modules 1315.
• the basic input/output system 1306 includes a display 1308 for displaying information and an input device 1309, such as a mouse or a keyboard, for the user to input information.
  • the display 1308 and the input device 1309 are both connected to the central processing unit 1301 via an input-output controller 1310 connected to the system bus 1305.
  • the basic input/output system 1306 can also include an input output controller 1310 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus.
  • the input and output controller 1310 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 1307 is connected to the central processing unit 1301 by a mass storage controller (not shown) connected to the system bus 1305.
  • the mass storage device 1307 and its associated computer readable medium provide non-volatile storage for the computer device 1300. That is, the mass storage device 1307 can include a computer readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
• computer device 1300 may also operate through a remote computer connected via a network, such as the Internet. That is, computer device 1300 can be connected to network 1312 via network interface unit 1311 coupled to system bus 1305, or network interface unit 1311 can be used to connect to other types of networks or remote computer systems (not shown).
  • Memory 1304 also includes one or more programs, one or more of which are stored in memory 1304 and configured to be executed by one or more processors.
  • the one or more programs described above include instructions for performing the information display method in the virtual environment on the server side provided in the above embodiment.
  • FIG. 19 is a block diagram showing the structure of a terminal 1400 according to an exemplary embodiment of the present invention.
• the terminal 1400 can be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player.
  • Terminal 1400 may also be referred to as a user device, a portable terminal, or the like.
  • the terminal 1400 includes a processor 1401 and a memory 1402.
  • Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
• the processor 1401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
  • the processor 1401 may also include a main processor and a coprocessor.
• the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
• in some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1401 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
  • Memory 1402 can include one or more computer readable storage media that can be tangible and non-transitory.
  • the memory 1402 may also include high speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices, flash memory storage devices.
  • the non-transitory computer readable storage medium in memory 1402 is for storing at least one instruction for execution by processor 1401 to implement information in a virtual environment provided in the present application. Display method.
  • the terminal 1400 optionally further includes: a peripheral device interface 1403 and at least one peripheral device.
  • the peripheral device includes at least one of a radio frequency circuit 1404, a touch display screen 1405, a camera 1406, an audio circuit 1407, a positioning component 1408, and a power source 1409.
  • Peripheral device interface 1403 can be used to connect at least one peripheral device associated with an I/O (Input/Output) to processor 1401 and memory 1402.
• in some embodiments, the processor 1401, the memory 1402, and the peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1401, the memory 1402, and the peripheral interface 1403 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 1404 is configured to receive and transmit an RF (Radio Frequency) signal, also referred to as an electromagnetic signal.
  • Radio frequency circuit 1404 communicates with the communication network and other communication devices via electromagnetic signals.
  • the RF circuit 1404 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1404 can communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, the World Wide Web, a metropolitan area network, an intranet, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
  • the RF circuit 1404 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
  • the touch display screen 1405 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • Touch display 1405 also has the ability to capture touch signals on or above the surface of touch display 1405.
  • the touch signal can be input to the processor 1401 as a control signal for processing.
  • Touch display 1405 is used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • the touch display screen 1405 can be one, disposed on the front panel of the terminal 1400; in other embodiments, the touch display screens 1405 can be at least two, respectively disposed on different surfaces of the terminal 1400 or in a folded design.
  • the touch display 1405 can be a flexible display disposed on a curved surface or a folded surface of the terminal 1400. The touch display screen 1405 can even be set to a non-rectangular irregular shape, that is, an irregularly shaped screen.
  • the touch display screen 1405 can be prepared by using an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • Camera component 1406 is used to capture images or video.
  • camera assembly 1406 includes a front camera and a rear camera.
  • the front camera is used for video calls or selfies
  • the rear camera is used for photo or video capture.
  • the rear cameras are at least two, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera are combined to realize the background-blur function, and the main camera and the wide-angle camera are combined to realize panoramic shooting and VR (Virtual Reality) shooting.
  • the camera assembly 1406 can also include a flash.
  • the flash can be a monochrome temperature flash or a two-color temperature flash.
  • the two-color temperature flash is a combination of a warm flash and a cool flash that can be used for light compensation at different color temperatures.
  • the audio circuit 1407 is for providing an audio interface between the user and the terminal 1400.
  • the audio circuit 1407 can include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals that are input to the processor 1401 for processing, or input to the RF circuit 1404 for voice communication.
  • the microphones may be multiple, and are respectively disposed at different parts of the terminal 1400.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is then used to convert electrical signals from the processor 1401 or the RF circuit 1404 into sound waves.
  • the speaker can be a conventional film speaker or a piezoelectric ceramic speaker.
  • the audio circuit 1407 can also include a headphone jack.
  • the positioning component 1408 is configured to locate the current geographic location of the terminal 1400 to implement navigation or LBS (Location Based Service).
  • the positioning component 1408 can be a positioning component based on a US-based GPS (Global Positioning System), a Chinese Beidou system, or a Russian Galileo system.
  • a power supply 1409 is used to power various components in the terminal 1400.
  • the power source 1409 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery that is charged by a wired line
  • a wireless rechargeable battery is a battery that is charged by a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • terminal 1400 also includes one or more sensors 1410.
  • the one or more sensors 1410 include, but are not limited to, an acceleration sensor 1411, a gyro sensor 1412, a pressure sensor 1413, a fingerprint sensor 1414, an optical sensor 1415, and a proximity sensor 1416.
  • the acceleration sensor 1411 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established by the terminal 1400.
  • the acceleration sensor 1411 can be used to detect components of gravity acceleration on three coordinate axes.
  • the processor 1401 can control the touch display screen 1405 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1411.
  • the acceleration sensor 1411 can also be used for the acquisition of game or user motion data.
  • the gyro sensor 1412 can detect the body direction and the rotation angle of the terminal 1400, and the gyro sensor 1412 can cooperate with the acceleration sensor 1411 to collect the 3D motion of the user to the terminal 1400. Based on the data collected by the gyro sensor 1412, the processor 1401 can implement functions such as motion sensing (such as changing the UI according to the user's tilting operation), image stabilization at the time of shooting, game control, and inertial navigation.
  • the pressure sensor 1413 can be disposed on a side border of the terminal 1400 and/or a lower layer of the touch display screen 1405.
  • when the pressure sensor 1413 is disposed at the side frame of the terminal 1400, the user's holding signal on the terminal 1400 can be detected, and left/right-hand recognition or shortcut operations can be performed according to the holding signal.
  • when the pressure sensor 1413 is disposed at the lower layer of the touch display screen 1405, the operable controls on the UI can be controlled according to the user's pressure operation on the touch display screen 1405.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1414 is configured to collect a fingerprint of the user to identify the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. Fingerprint sensor 1414 can be provided on the front, back, or side of terminal 1400. When a physical button or vendor logo is provided on the terminal 1400, the fingerprint sensor 1414 can be integrated with the physical button or the vendor logo.
  • Optical sensor 1415 is used to collect ambient light intensity.
  • the processor 1401 can control the display brightness of the touch display 1405 based on the ambient light intensity acquired by the optical sensor 1415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1405 is raised; when the ambient light intensity is low, the display brightness of the touch display screen 1405 is lowered.
  • the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 based on the ambient light intensity acquired by the optical sensor 1415.
  • Proximity sensor 1416, also referred to as a distance sensor, is typically disposed on the front side of terminal 1400. Proximity sensor 1416 is used to capture the distance between the user and the front of terminal 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front side of the terminal 1400 is gradually decreasing, the processor 1401 controls the touch display screen 1405 to switch from the screen-on state to the screen-off state; when the proximity sensor 1416 detects that the distance between the user and the front side of the terminal 1400 is gradually increasing, the processor 1401 controls the touch display screen 1405 to switch from the screen-off state to the screen-on state.
  • Those skilled in the art will appreciate that the structure shown in FIG. 19 does not constitute a limitation to terminal 1400, which may include more or fewer components than illustrated, combine certain components, or employ a different component arrangement.
  • the application further provides a computer readable storage medium, where the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the information display method in the virtual environment provided by the above method embodiments.
  • the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the information display method in the virtual environment described in the above aspects.
  • a plurality as referred to herein means two or more.
  • "and/or” describing the association relationship of the associated objects, indicating that there may be three relationships, for example, A and/or B, which may indicate that there are three cases where A exists separately, A and B exist at the same time, and B exists separately.
  • the character "/" generally indicates that the contextual object is an "or" relationship.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

An information display method, apparatus, device, and storage medium in a virtual environment. The method includes: obtaining a first coordinate of a first virtual object (900) in the virtual environment; obtaining, according to the first coordinate, a second coordinate of a second virtual object (920) that is located in a first direction of the first virtual object (900) and is in a preset behavior state; calculating a first distance between the first virtual object (900) and the second virtual object (920) according to the first coordinate and the second coordinate; obtaining, according to the first distance, a sound-effect intensity of the second virtual object (920) in the first direction; and sending a sound-effect display instruction to a first terminal corresponding to the first virtual object (900). By displaying a sound-effect indication pattern (930) centered on the first virtual object (900), and because the pattern (930) indicates that the second virtual object (920) is present in a location region negatively correlated with the sound-effect intensity, the distance between the second virtual object (920) and the first virtual object (900) can be displayed intuitively, improving the realism of the virtual environment.

Description

Information display method, apparatus, device, and storage medium in a virtual environment
This application claims priority to Chinese Patent Application No. 201810391238.X, entitled "Information display method, apparatus, device, and storage medium in a virtual environment", filed on April 27, 2018, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computer technology, and in particular to an information display method, apparatus, device, and storage medium in a virtual environment.
Background
Many applications based on virtual environments exist, such as multiplayer online battle arena games and military simulation programs. The virtual environment provided by such an application allows one or more virtual characters to perform activities, including moving, jumping, attacking, and casting skills.
In the related art, a sound-effect playback method in a virtual environment includes: the server obtains the behavior state of each virtual object in the same virtual environment; determines the sound effect of each virtual object according to its behavior state; and sends a playback instruction to the terminal, the playback instruction instructing the terminal to play the sound effect corresponding to each virtual object. For example, if virtual character A is in an attacking state, the terminal plays an attack sound effect; if virtual character B is in a skill-casting state, the terminal plays a skill-casting sound effect.
Summary
The embodiments of this application provide an information display method, apparatus, device, and storage medium in a virtual environment. In one aspect, an embodiment of this application provides an information display method in a virtual environment, the method including:
obtaining a first coordinate of the first virtual object in the virtual environment;
obtaining, according to the first coordinate, a second coordinate of a second virtual object that is located in a first direction of the first virtual object and is in a preset behavior state;
calculating a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate;
obtaining, according to the first distance, a sound-effect intensity of the second virtual object in the first direction;
sending a sound-effect display instruction to a first terminal corresponding to the first virtual object, the sound-effect display instruction being used to instruct the first terminal to display, in the virtual environment, a sound-effect indication pattern centered on the first virtual object, the sound-effect indication pattern being used to indicate that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
In one aspect, an embodiment of this application provides an information display method in a virtual environment, the method including:
sending a first coordinate of a first virtual object in the virtual environment to a server;
receiving a sound-effect display instruction sent by the server;
displaying, according to the sound-effect display instruction, a sound-effect indication pattern centered on the first virtual object in the virtual environment, the sound-effect indication pattern being used to indicate that a second virtual object in a preset behavior state is present in a location region along a first direction of the first virtual object that is negatively correlated with a sound-effect intensity;
where the sound-effect display instruction is an instruction sent by the server after obtaining the sound-effect intensity; the sound-effect intensity is obtained by the server according to a first distance between the first virtual object and the second virtual object; and the first distance is calculated by the server from the first coordinate and the second coordinate after obtaining, according to the first coordinate, the second coordinate of the second virtual object located in the first direction.
In one aspect, an embodiment of this application provides an information display apparatus in a virtual environment, the apparatus including:
an obtaining module, configured to obtain the first coordinate of the first virtual object in the virtual environment, and obtain, according to the first coordinate, the second coordinate of a second virtual object that is located in the first direction of the first virtual object and is in a preset behavior state;
a processing module, configured to calculate the first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, and obtain, according to the first distance, the sound-effect intensity of the second virtual object in the first direction;
a sending module, configured to send a sound-effect display instruction to the first terminal corresponding to the first virtual object, the instruction being used to instruct the first terminal to display, in the virtual environment, a sound-effect indication pattern centered on the first virtual object, the pattern being used to indicate that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
In one aspect, an embodiment of this application provides an information display apparatus in a virtual environment, the apparatus including:
a sending module, configured to send the first coordinate of a first virtual object in the virtual environment to a server;
a receiving module, configured to receive the sound-effect display instruction sent by the server;
a display module, configured to display, according to the sound-effect display instruction, a sound-effect indication pattern centered on the first virtual object in the virtual environment, the pattern being used to indicate that a second virtual object in a preset behavior state is present in a location region along the first direction of the first virtual object that is negatively correlated with the sound-effect intensity;
where the sound-effect display instruction is an instruction sent by the server after obtaining the sound-effect intensity; the sound-effect intensity is obtained by the server according to the first distance between the first virtual object and the second virtual object; and the first distance is calculated from the first coordinate and the second coordinate after the server obtains, according to the first coordinate, the second coordinate of the second virtual object located in the first direction.
In one aspect, an embodiment of this application provides an electronic device including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the information display method in a virtual environment described above.
In one aspect, an embodiment of this application provides a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the information display method in a virtual environment described above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 2 is a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 3 is a schematic diagram of calibrated directions of a virtual object according to an exemplary embodiment of this application;
FIG. 4 is a schematic diagram of a sound-effect indication pattern according to an exemplary embodiment of this application;
FIG. 5 is a schematic diagram of a sound-effect indication pattern according to an exemplary embodiment of this application;
FIG. 6 is a schematic diagram of a sound-effect indication pattern according to an exemplary embodiment of this application;
FIG. 7 is a schematic diagram of a sound-effect indication pattern according to an exemplary embodiment of this application;
FIG. 8 is a schematic diagram of a sound-effect indication pattern according to an exemplary embodiment of this application;
FIG. 9 is a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 10 is a schematic diagram of a preset distance range according to an exemplary embodiment of this application;
FIG. 11 is a schematic diagram of a preset distance range according to an exemplary embodiment of this application;
FIG. 12 is a schematic diagram of a preset distance range according to an exemplary embodiment of this application;
FIG. 13 is a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 14 is a schematic diagram of a display interface of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 15 is a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application;
FIG. 16 is a structural block diagram of an information display apparatus in a virtual environment according to an exemplary embodiment of this application;
FIG. 17 is a structural block diagram of an information display apparatus in a virtual environment according to an exemplary embodiment of this application;
FIG. 18 is a structural block diagram of a computer device according to an exemplary embodiment of this application;
FIG. 19 is a structural block diagram of a terminal according to an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, implementations of this application are further described in detail below with reference to the accompanying drawings.
First, the terms involved in the embodiments of this application are explained.
Virtual environment: a virtual environment provided when an application runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated semi-fictional environment, or a purely fictional environment, and may be two-dimensional or three-dimensional.
Virtual object: a movable object in the virtual environment, such as a virtual character, a virtual animal, or an anime character. Optionally, a virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the virtual environment and occupies part of the space in the virtual environment.
Top-down perspective: a perspective of observing the virtual environment from above. For example, a multiplayer online battle arena game usually observes the virtual environment at a 45-degree downward angle, with the observing camera located above the virtual environment.
Coordinate: the coordinate value of the reference point of each virtual object in the virtual environment; the reference point may be a preset pixel on the head, shoulder, foot, or chest of the virtual object. Exemplarily, in a two-dimensional virtual environment, the coordinate of a virtual object is (X, Y), where X is the horizontal coordinate and Y is the vertical coordinate of the virtual object in the virtual environment; in a three-dimensional virtual environment, the coordinate is (X, Y, Z), where X usually denotes the east-west direction along the ground plane, Y the north-south direction along the ground plane, and Z the direction perpendicular to the ground plane.
Sound-effect indication pattern: a pattern that visualizes a sound effect near a virtual object (for example, on the surrounding ground), centered on the virtual object. Exemplarily, the sound-effect indication pattern may be an abstract waveform image.
In the related art, the terminal simultaneously plays the sound effects of every virtual object in the same virtual environment, and the user cannot judge the distance between virtual objects from those sound effects, so the realism of the virtual environment is poor. The technical solution provided in the embodiments of this application obtains the coordinate of a second virtual object in a first direction of a first virtual object, calculates a first distance between the first virtual object and the second virtual object, determines a sound-effect intensity according to the first distance, and displays a sound-effect indication pattern centered on the first virtual object according to the intensity. Because the pattern indicates that the second virtual object is present in a location region negatively correlated with the sound-effect intensity, the distance between the second virtual object and the first virtual object can be displayed intuitively, improving the realism of the virtual environment.
Referring to FIG. 1, which shows a schematic diagram of an implementation environment of an information display method in a virtual environment according to an exemplary embodiment of this application. As shown in FIG. 1, the implementation environment includes a first terminal 110, a second terminal 120, and a server 130. The first terminal 110 and the server 130 establish a communication connection through a wired or wireless network, and the second terminal 120 establishes a communication connection with the server 130 through a wired or wireless network. In the embodiments of this application, the server 130 is communicatively connected to at least two terminals; in the drawing, the first terminal 110 and the second terminal 120 represent those terminals.
A user controls a first virtual object in the virtual environment through the first terminal 110, and the first terminal 110 sends the first behavior state and the first coordinate of the first virtual object to the server.
A user controls a second virtual object in the virtual environment through the second terminal 120, and the second terminal 120 sends the second behavior state and the second coordinate of the second virtual object to the server.
The server 130 detects, according to the first coordinate, whether the second behavior state of the second virtual object located in the first direction of the first virtual object is a preset behavior state. If it is, the server calculates a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate; obtains, according to the first distance, a sound-effect intensity of the second virtual object in the first direction; and sends a sound-effect display instruction to the first terminal 110.
The first terminal 110 receives the sound-effect display instruction sent by the server 130 and, according to it, displays a sound-effect indication pattern centered on the first virtual object in the virtual environment; the pattern indicates that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
Exemplarily, the virtual environment may be a multiplayer online battle arena game viewed from a top-down perspective; the first virtual object is a first hero controlled by a first player, and the second virtual object is a second hero controlled by a second player. When the second hero approaches the first hero, a sound-effect indication pattern is displayed centered on the first hero, pointing toward the location of the second hero; at least one of the size, area, outline width, and number of texture lines of the pattern is correlated with the distance between the first hero and the second hero.
Referring to FIG. 2, which shows a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application. The method may be applied to the server 130 in the implementation environment shown in FIG. 1, and includes:
Step 201: Obtain a first coordinate of a first virtual object in the virtual environment.
The first terminal corresponding to the first virtual object sends the first behavior state of the first virtual object and the first coordinate of the first virtual object in the virtual environment to the server, and the server receives the first behavior state and the first coordinate.
Exemplarily, the first terminal sends the first behavior state and the first coordinate to the server at preset time intervals; or, the first terminal sends them when it determines that the first behavior state has changed.
Step 202: Obtain, according to the first coordinate, a second coordinate of a second virtual object that is located in a first direction of the first virtual object and is in a preset behavior state.
The server detects, according to the first coordinate, whether a second virtual object in the preset behavior state exists in the first direction of the first virtual object; if such a second virtual object exists in the first direction, the server obtains the second coordinate of the second virtual object. The behavior state of the second virtual object is sent to the server by the second terminal corresponding to the second virtual object.
Exemplarily, as shown in FIG. 3, the server determines a circular detection region centered on the first coordinate 300 with a preset distance as the radius, and divides the detection region, by straight lines passing through the first coordinate, into multiple regions of equal area 310, 320, 330, 340, 350, 360, 370, 380. Each region is one direction, and the first direction is any one of these regions. For example, if the first direction is region 310, the server detects whether a second virtual object in the preset behavior state exists within region 310; if so, the server obtains the second coordinate of the second virtual object.
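The division of the detection region into equal angular regions can be sketched as follows. This is an illustrative Python reconstruction, not part of the application; the eight-sector layout matches FIG. 3, but the function name and the convention that region 0 starts at the +X axis are assumptions:

```python
import math

def facing_sector(center, target, sectors=8):
    """Return the index of the equal-area angular region (direction) around
    `center` that contains `target`; region 0 starts at the +X axis and the
    index increases counterclockwise."""
    angle = math.atan2(target[1] - center[1], target[0] - center[0])
    angle %= 2 * math.pi  # normalize to [0, 2*pi)
    return int(angle // (2 * math.pi / sectors))
```

With eight sectors, a target due "east" of the center falls in region 0 and a target due "north" falls in region 2.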
Optionally, the server performs the step of obtaining the second coordinate when it determines that a second virtual object in the preset behavior state exists in the first direction and the first virtual object and the second virtual object do not belong to the same camp.
The server judges whether the first virtual object and the second virtual object belong to the same camp; if they do not belong to the same camp, the step of obtaining the second coordinate is performed; if they belong to the same camp, the procedure stops.
Exemplarily, a camp is the team to which a player-controlled hero belongs in a multiplayer online battle arena game. For example, if the first hero belongs to the red team and the second hero belongs to the blue team, and the red and blue teams are hostile, the first hero and the second hero do not belong to the same camp; if the first hero belongs to the red team and the second hero belongs to the yellow team, and the red and yellow teams are allied, the first hero and the second hero belong to the same camp; if both heroes belong to the red team, they belong to the same camp.
Exemplarily, the preset behavior state is a behavior state of the virtual object preset in the terminal; for example, the preset behavior state includes at least one of moving, attacking, and casting a skill.
Step 203: Calculate a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate.
Exemplarily, in a two-dimensional virtual environment, with the first coordinate (X1, Y1) and the second coordinate (X2, Y2), the server calculates the first distance between the first virtual object and the second virtual object as D = √((X1 − X2)² + (Y1 − Y2)²).
Exemplarily, in a three-dimensional virtual environment, with the first coordinate (X1, Y1, Z1) and the second coordinate (X2, Y2, Z2), the server calculates the first distance between the first virtual object and the second virtual object as D = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²).
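A minimal sketch of this distance computation, covering both the two-dimensional and three-dimensional cases described above (illustrative Python; the function name is an assumption):

```python
import math

def first_distance(first_coord, second_coord):
    """Euclidean distance between two coordinates of the same dimension,
    e.g. (X1, Y1) vs (X2, Y2) or (X1, Y1, Z1) vs (X2, Y2, Z2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first_coord, second_coord)))
```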
Step 204: Obtain, according to the first distance, a sound-effect intensity of the second virtual object in the first direction.
The server obtains the sound-effect intensity of the second virtual object in the first direction according to the first distance. The sound-effect intensity indicates the strength of the sound effect that the preset behavior state of the second virtual object exerts on the first virtual object, and is inversely related to the first distance: the smaller the first distance (that is, the closer the second virtual object is to the first virtual object), the greater the sound-effect intensity; the larger the first distance, the smaller the intensity. For example, when the preset behavior state is moving, the second virtual object produces footsteps while moving; the smaller the first distance between the two objects, the greater the sound-effect intensity of the footsteps, and the larger the first distance, the smaller the intensity.
Exemplarily, the server stores a first correspondence between distance and sound-effect intensity, and can look up the sound-effect intensity corresponding to the first distance in the first correspondence.
Step 205: Send a sound-effect display instruction to the first terminal corresponding to the first virtual object.
After obtaining the sound-effect intensity, the server sends a sound-effect display instruction carrying the intensity to the first terminal.
After receiving the instruction, the first terminal displays a sound-effect indication pattern centered on the first virtual object; the pattern indicates that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
Optionally, the sound-effect display instruction includes the sound-effect intensity, and the first terminal determines pattern parameters of the indication pattern according to the intensity; the pattern parameters include at least one of size, area, outline width, and number of texture lines.
Exemplarily, the first terminal pre-stores a second correspondence between sound-effect intensity and pattern parameters, and queries the second correspondence according to the intensity to determine the parameters of the indication pattern.
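Such a correspondence lookup could be sketched as a simple table. The concrete intensity labels and outline widths below are illustrative assumptions, not values specified by the application:

```python
# Hypothetical "second correspondence" table: sound-effect intensity -> pattern parameters.
SECOND_CORRESPONDENCE = {
    "first":  {"outline_width": "wide"},
    "second": {"outline_width": "medium"},
    "third":  {"outline_width": "narrow"},
}

def pattern_params(intensity):
    """Look up the pattern parameters for a given sound-effect intensity level."""
    return SECOND_CORRESPONDENCE[intensity]
```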
As shown in FIG. 4, when the sound-effect intensity is the first intensity, the first terminal displays, in the virtual environment 400, a sound-effect indication pattern 420 centered on the first virtual object 410 whose pattern parameter is a wide outline, the parameter corresponding to the first intensity.
As shown in FIG. 5, when the sound-effect intensity is the second intensity, the first terminal displays, in the virtual environment 400, a sound-effect indication pattern 420 centered on the first virtual object 410 whose pattern parameter is a medium outline, the parameter corresponding to the second intensity.
As shown in FIG. 6, when the sound-effect intensity is the third intensity, the first terminal displays, in the virtual environment 400, a sound-effect indication pattern 420 centered on the first virtual object 410 whose pattern parameter is a narrow outline, the parameter corresponding to the third intensity.
Optionally, the sound-effect display instruction also carries the preset behavior state of the second virtual object, so that the first terminal determines the pattern type of the indication pattern according to the preset behavior state.
Exemplarily, the first terminal pre-stores a third correspondence between preset behavior states and pattern types, and queries the third correspondence according to the preset behavior state to determine the pattern type of the indication pattern.
After receiving the instruction, the first terminal determines the pattern type of the indication pattern according to the preset behavior state; and/or determines the pattern parameters according to the sound-effect intensity, the parameters including at least one of size, area, outline width, and number of texture lines.
Exemplarily, as shown in FIG. 7, when the preset behavior state is moving and the intensity is the first intensity, the pattern type corresponding to moving is a waveform pattern and the parameter corresponding to the first intensity is a wide outline; the first terminal displays, in the virtual environment 500, a waveform indication pattern 510 with a wide outline centered on the first virtual object 410.
As shown in FIG. 8, when the preset behavior state is casting a skill and the intensity is the second intensity, the pattern type corresponding to skill casting is a fan-shaped pattern and the parameter corresponding to the second intensity is a narrow outline; the first terminal displays, in the virtual environment 500, a fan-shaped indication pattern 510 with a narrow outline centered on the first virtual object 410.
In summary, in this embodiment of the application, by obtaining the coordinate of the second virtual object in the first direction of the first virtual object, calculating the first distance between the two objects, determining the sound-effect intensity from the first distance, and displaying a sound-effect indication pattern centered on the first virtual object according to the intensity, and because the pattern indicates that the second virtual object is present in a location region negatively correlated with the intensity, the distance between the second virtual object and the first virtual object can be displayed intuitively, improving the realism of the virtual environment.
Optionally, in this embodiment of the application, by judging whether the first virtual object and the second virtual object belong to the same camp and performing the step of obtaining the second coordinate only when they do not, the interference caused to the user when the first terminal displays indication patterns of second virtual objects in the same camp is avoided, improving the efficiency of information display in the virtual environment.
Referring to FIG. 9, which shows a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application. The method may be applied in the implementation environment shown in FIG. 1, and includes:
Step 601: The first terminal sends the first coordinate of the first virtual object in the virtual environment to the server at preset time intervals.
The first terminal sends the first coordinate to the server at preset time intervals over a wired or wireless network.
Optionally, the first terminal detects whether the first virtual object is in a preset behavior state; if it is, the first terminal sends the preset behavior state of the first virtual object and the first coordinate to the server; if it is not, the first terminal sends only the first coordinate.
Step 602: The second terminal sends the preset behavior state of the second virtual object and the second coordinate of the second virtual object in the virtual environment to the server at preset time intervals.
The second terminal detects whether the second virtual object is in a preset behavior state; if it is, the second terminal sends the preset behavior state and the second coordinate to the server at preset time intervals over a wired or wireless network; if it is not, the second terminal sends only the second coordinate.
Step 603: The server detects, according to the first coordinate, whether a candidate virtual object that is in the preset behavior state and does not belong to the same camp as the first virtual object exists within a detection region centered on the first coordinate with the preset distance as the radius.
The server detects, according to the first coordinate, whether candidate virtual objects exist within the detection region; a candidate virtual object is a virtual object that is in the preset behavior state and does not belong to the same camp as the first virtual object.
The preset distance can be set as needed. Exemplarily, as shown in FIG. 10, a detection region 710 centered on the first coordinate 700 with the preset distance r1 as the radius lies entirely within the display area 720 of the virtual environment, without covering all of it; as shown in FIG. 11, a detection region 710 with radius r2 lies partly within the display area 720, without covering all of it; as shown in FIG. 12, a detection region 710 with radius r3 lies partly within the display area 720 and covers all of it.
Step 604: When at least two candidate virtual objects exist in the first direction within the detection region, the server obtains the coordinates of the at least two candidate virtual objects.
When the server detects at least two candidate virtual objects in the first direction of the first coordinate, it obtains their coordinates; the coordinates of a candidate virtual object are sent to the server by the terminal corresponding to that candidate virtual object over a wired or wireless network.
Step 605: The server calculates the distance between each candidate virtual object and the first virtual object according to the coordinates of the candidate virtual objects and the first coordinate.
The server calculates these distances according to the coordinates of the at least two candidate virtual objects and the first coordinate of the first virtual object. For the method of calculating the distance between a candidate virtual object and the first virtual object, refer to step 203 in the embodiment of FIG. 2, which is not repeated here.
Step 606: The server takes the candidate object with the smallest distance as the second virtual object, and takes the distance between the second virtual object and the first virtual object as the first distance.
According to the calculated distances, the server takes the candidate object with the smallest distance as the second virtual object, and the distance between the second virtual object and the first virtual object as the first distance.
Step 607: The server determines the target distance range, among n preset distance ranges, within which the first distance falls.
The n preset distance ranges are non-overlapping, end-to-end adjacent value ranges, where n is a natural number and n ≥ 2.
Exemplarily, the first distance is 7100 distance units, and the server stores three end-to-end preset distance ranges: a first distance range (0 to 7000 distance units), a second distance range (7000 to 13999 distance units), and a third distance range (13999 to 21000 distance units). The server determines that the first distance falls in the second distance range, so the second distance range is the target distance range.
Step 608: The server determines the sound-effect intensity of the second virtual object in the first direction according to the target distance range.
The server determines the sound-effect intensity of the second virtual object in the first direction according to the target distance range, among the n preset distance ranges, within which the first distance falls.
Exemplarily, the preset distance ranges include two ranges: the first preset distance range is 0 to a first distance threshold, and the second is the first distance threshold to the preset distance. The server first judges whether the first distance is less than the first distance threshold; if so, it falls in the first range and the sound-effect intensity of the second virtual object in the first direction is the first intensity; otherwise it falls in the second range and the intensity is the second intensity, where the first intensity is greater than the second intensity.
Exemplarily, the preset distance ranges include three ranges: 0 to a first distance threshold, the first distance threshold to a second distance threshold, and the second distance threshold to the preset distance. The server first judges whether the first distance is less than the first threshold; if so, it falls in the first range and the intensity is the first intensity. If not, the server judges whether the first distance is less than the second threshold; if so, it falls in the second range and the intensity is the second intensity; otherwise it falls in the third range and the intensity is the third intensity, where the first intensity is greater than the second and the second is greater than the third.
Exemplarily, the server stores three end-to-end preset distance ranges: a first distance range (0 to 7000 distance units), a second distance range (7000 to 13999 distance units), and a third distance range (13999 to 21000 distance units). After calculating the first distance, the server first judges whether it is within the first range; if so, the sound-effect intensity of the second virtual object in the first direction is strong. If not, the server judges whether it is within the second range; if so, the intensity is medium. If not, the server judges whether it is within the third range; if so, the intensity is weak; if the first distance exceeds the third range, all coordinates that participated in the computation are deleted.
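The three-range lookup described above can be sketched as follows. The thresholds come from the example in the text, while the function name and the strong/medium/weak return labels are assumptions made for illustration:

```python
def sound_intensity(first_distance, bounds=(7000, 13999, 21000)):
    """Map the first distance onto a strong/medium/weak sound-effect intensity
    using end-to-end preset ranges; return None when the distance exceeds the
    third range (the coordinates are then discarded)."""
    if first_distance < bounds[0]:
        return "strong"
    if first_distance < bounds[1]:
        return "medium"
    if first_distance <= bounds[2]:
        return "weak"
    return None
```

For instance, a first distance of 7100 distance units falls in the second range and maps to the medium intensity.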
Step 609: The server sends a sound-effect display instruction to the first terminal.
The server sends a sound-effect display instruction carrying the first direction and the sound-effect intensity to the first terminal. Optionally, the instruction also carries the preset behavior state.
Step 610: The first terminal displays a sound-effect indication pattern centered on the first virtual object according to the sound-effect display instruction.
After receiving the instruction, the first terminal determines the pattern parameters of the indication pattern according to the sound-effect intensity, the parameters including at least one of size, area, outline width, and number of texture lines; and selects, according to the first direction, the indication pattern corresponding to the first direction from m indication patterns of different directions.
Exemplarily, the first terminal pre-stores the second correspondence between sound-effect intensity and pattern parameters, and a fourth correspondence between directions and pattern types. The first terminal determines the pattern parameters from the sound-effect intensity and the pattern type of the indication pattern in the first direction from the first direction; after determining the parameters and the pattern, it displays the indication pattern centered on the first virtual object. The pattern indicates that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
Exemplarily, the first direction is region 310 shown in FIG. 3; the terminal determines from the first direction that the indication pattern is the one displayed in region 310. The sound-effect intensity is the first intensity; the terminal determines from the first intensity that the pattern parameter is a wide outline, and displays, centered on the first virtual object, an indication pattern with a wide outline in region 310.
Optionally, after receiving the instruction, the first terminal determines the pattern type of the indication pattern according to the preset behavior state. For specific implementations, refer to the embodiments of FIG. 2 and FIG. 9, which are not repeated here.
Optionally, when the first terminal detects that an environment image is displayed near the first coordinate, it displays the indication pattern overlaid on the environment image. Overlaying the pattern on the environment image presents it to the user more prominently and avoids interference from the environment image.
In summary, in this embodiment of the application, by obtaining the coordinate of the second virtual object in the first direction of the first virtual object, calculating the first distance between the two objects, determining the sound-effect intensity from the first distance, and displaying an indication pattern centered on the first virtual object according to the intensity, and because the pattern indicates that the second virtual object is present in a location region negatively correlated with the intensity, the distance between the second virtual object and the first virtual object can be displayed intuitively, improving the realism of the virtual environment.
Optionally, in this embodiment of the application, by judging whether the first virtual object and the second virtual object belong to the same camp and performing the step of obtaining the second coordinate only when they do not, the interference caused to the user when the first terminal displays indication patterns of second virtual objects in the same camp is avoided, improving the efficiency of information display in the virtual environment.
Optionally, in this embodiment of the application, by determining the second virtual object among the candidate virtual objects, determining the sound-effect intensity from the first distance between the second virtual object and the first virtual object, and displaying an indication pattern centered on the first virtual object according to the intensity, the display clutter and user confusion caused by displaying the indication patterns of multiple candidate virtual objects in the same direction are avoided, improving the efficiency of information display in the virtual environment.
Optionally, the first terminal in this embodiment of the application selects, according to the first direction, the indication pattern corresponding to the first direction from multiple indication patterns of different directions; because the pattern is selected from a finite set of direction-specific indication patterns, the problem of the pattern pointing too precisely is solved, improving the simulation balance of the virtual environment.
Referring to FIG. 13, which shows a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application. The method may be applied to the first terminal 110 in the implementation environment shown in FIG. 1, and includes:
Step 801: Display a first display frame observing the virtual environment from a top-down perspective, in which an object model of the first virtual object is displayed.
Exemplarily, as shown in FIG. 14, the first display frame 900 displayed in the first terminal is a virtual environment frame observed from a top-down perspective, and the object model of the first virtual object 910 is displayed in the first display frame 900.
Step 802: When a second virtual object in a preset behavior state exists in the first direction of the first virtual object in the virtual environment, display a sound-effect indication pattern centered on the first virtual object, the pattern pointing in the direction of the second virtual object's location.
Exemplarily, as shown in FIG. 14, in the virtual environment 900, when a second virtual object 920 exists in the first direction of the first virtual object 910, a sound-effect indication pattern 930 is displayed centered on the position of the first virtual object 910, pointing toward the position of the second virtual object 920. The pattern parameters of the indication pattern are determined according to the first distance R between the first virtual object 910 and the second virtual object 920, and include at least one of size, area, outline width, and number of texture lines; for calculating the first distance R and thereby obtaining the sound-effect intensity, refer to the descriptions in the embodiments of FIG. 2 and FIG. 9, which are not repeated here.
For the method of determining the sound-effect indication pattern, refer to the embodiments of FIG. 2 and FIG. 9 above, which are not repeated here.
In summary, in this embodiment of the application, a sound-effect indication pattern pointing toward the location of the second virtual object is displayed centered on the first virtual object; because the pattern parameters are determined according to the first distance between the first virtual object and the second virtual object, the distance between the two objects can be displayed intuitively, improving the realism of the virtual environment.
Referring to FIG. 15, which shows a flowchart of an information display method in a virtual environment according to an exemplary embodiment of this application. The method may be applied in the implementation environment shown in FIG. 1.
In this embodiment, the terminal stores two sets of coordinate data, (x, y) and (X, Y). Here, (x, y) is the coordinate of a virtual object when it is in a preset behavior state, and (X, Y) is its coordinate in any behavior state; that is, when the virtual object is in the preset state, its (x, y) and (X, Y) are the same, and when it is not, the terminal stores only (X, Y) and not (x, y). In this embodiment, the preset behavior state includes at least one of moving, attacking, and casting a skill.
At predetermined time intervals, the terminal detects whether the virtual object is performing a preset behavior such as moving, attacking, or casting a skill; if the player is performing a preset behavior, the terminal records the coordinates (x, y) and (X, Y) of the virtual object and uploads the coordinates (x, y) and (X, Y) to the server.
The server stores the coordinates (x, y) and (X, Y) as arrays in a stack; based on the recorded coordinates, it calculates the first distance between any two coordinates and determines the calibrated direction. It first judges whether the first distance is within the preset distance and whether the virtual objects corresponding to the two coordinates belong to the same camp. If the two coordinates are within the preset distance and not in the same camp, the server judges whether the first distance between them is within the first distance range; if so, it issues an instruction to the terminal to play a strong special effect in the calibrated direction. If not, it judges whether the first distance is within the second distance range; if so, it issues an instruction to play a medium special effect in the calibrated direction. If not, it judges whether the first distance is within the third distance range; if so, it issues an instruction to play a weak special effect in the calibrated direction; if not, the data of the two coordinates is destroyed.
In this embodiment, the special effect is the sound-effect indication pattern in the foregoing embodiments. The server destroys coordinate data whose computation is complete, to relieve stack pressure.
Referring to FIG. 16, which shows a structural block diagram of an information display apparatus in a virtual environment according to an exemplary embodiment of this application. As shown, the apparatus may be applied to the server 130 in the implementation environment shown in FIG. 1, and includes an obtaining module 1110, a processing module 1120, and a sending module 1130:
The obtaining module 1110 is configured to obtain the first coordinate of the first virtual object in the virtual environment, and obtain, according to the first coordinate, the second coordinate of a second virtual object that is located in the first direction of the first virtual object and is in a preset behavior state.
The processing module 1120 is configured to calculate the first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, and obtain, according to the first distance, the sound-effect intensity of the second virtual object in the first direction.
The sending module 1130 is configured to send a sound-effect display instruction to the first terminal corresponding to the first virtual object, the instruction being used to instruct the first terminal to display, in the virtual environment, a sound-effect indication pattern centered on the first virtual object, the pattern being used to indicate that the second virtual object is present in a location region along the first direction that is negatively correlated with the sound-effect intensity.
In an optional embodiment, the sound-effect display instruction includes the sound-effect intensity, which is used to instruct the first terminal to determine the pattern parameters of the indication pattern according to the intensity; the parameters include at least one of size, area, outline width, and number of texture lines.
In an optional embodiment, the instruction includes the preset behavior state, used to instruct the first terminal to determine the pattern type of the indication pattern according to the preset behavior state; and/or the instruction includes the first direction, used to instruct the first terminal to select, from m indication patterns of different directions, the one corresponding to the first direction, m being a natural number and m ≥ 1.
In an optional embodiment, the obtaining module 1110 is further configured to send the sound-effect display instruction to the first terminal when the first virtual object and the second virtual object do not belong to the same camp.
In an optional embodiment, the processing module 1120 is further configured to detect whether other virtual objects in the preset behavior state exist in the first direction, the other virtual objects not belonging to the same camp as the first virtual object; and when no other virtual objects exist in the first direction, obtain the sound-effect intensity of the second virtual object in the first direction according to the first distance.
In an optional embodiment, the processing module 1120 is further configured to determine the target distance range, among n preset distance ranges, within which the first distance falls, the n preset distance ranges being non-overlapping, end-to-end adjacent value ranges, n being a natural number and n ≥ 2; and determine the sound-effect intensity of the second virtual object in the first direction according to the target distance range.
In an optional embodiment, the processing module 1120 is further configured to detect whether candidate virtual objects in the preset behavior state exist within a detection region centered on the first coordinate with the preset distance as the radius; when at least two candidate virtual objects exist in the first direction within the detection region, calculate the distance between each candidate virtual object and the first virtual object; and determine the candidate virtual object closest to the first virtual object as the second virtual object.
In an optional embodiment, the obtaining module 1110 is further configured to obtain the first coordinate of the first virtual object in the virtual environment at preset time intervals.
Referring to FIG. 17, which shows a structural block diagram of an information display apparatus in a virtual environment according to an exemplary embodiment of this application. As shown, the apparatus may be applied to the first terminal 110 in the implementation environment shown in FIG. 1, and includes a sending module 1210, a receiving module 1220, and a display module 1230:
The sending module 1210 is configured to send the first coordinate of the first virtual object in the virtual environment to the server.
The receiving module 1220 is configured to receive the sound-effect display instruction sent by the server.
The display module 1230 is configured to display, according to the sound-effect display instruction, a sound-effect indication pattern centered on the first virtual object in the virtual environment, the pattern being used to indicate that a second virtual object in a preset behavior state is present in a location region along the first direction of the first virtual object that is negatively correlated with the sound-effect intensity.
The sound-effect display instruction is an instruction sent by the server after obtaining the sound-effect intensity; the sound-effect intensity is obtained by the server according to the first distance between the first virtual object and the second virtual object; and the first distance is calculated from the first coordinate and the second coordinate after the server obtains, according to the first coordinate, the second coordinate of the second virtual object located in the first direction.
In an optional embodiment, the display module 1230 is further configured to determine the pattern parameters of the indication pattern according to the sound-effect intensity, the parameters including at least one of size, area, outline width, and number of texture lines.
In an optional embodiment, the sound-effect display instruction further includes the preset behavior state, and/or, the first direction;
the display module 1230 is further configured to determine the pattern type of the indication pattern according to the preset behavior state; and/or select, from m indication patterns of different directions, the one corresponding to the first direction, m being a natural number and m ≥ 1.
In an optional embodiment, the sending module 1210 is further configured to send the first coordinate to the server at preset time intervals.
Referring to FIG. 18, which shows a structural block diagram of a computer device according to an exemplary embodiment of this application. The computer device is configured to implement the server-side information display method in a virtual environment provided in the above embodiments, and may be the server 130 in the embodiment of FIG. 1. Specifically:
The computer device 1300 includes a central processing unit (CPU) 1301, a system memory 1304 including a random access memory (RAM) 1302 and a read-only memory (ROM) 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301. The computer device 1300 further includes a basic input/output system (I/O system) 1306 that helps transfer information between components within the computer, and a mass storage device 1307 for storing an operating system 1313, application programs 1314, and other program modules 1315.
The basic input/output system 1306 includes a display 1308 for displaying information and an input device 1309, such as a mouse or a keyboard, for the user to input information. The display 1308 and the input device 1309 are both connected to the central processing unit 1301 through an input/output controller 1310 connected to the system bus 1305. The basic input/output system 1306 may further include the input/output controller 1310 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1310 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1307 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1307 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1307 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Certainly, those skilled in the art will know that computer storage media are not limited to the above. The system memory 1304 and the mass storage device 1307 described above may be collectively referred to as memory.
According to various embodiments of the present invention, the computer device 1300 may also operate through a remote computer connected to a network such as the Internet. That is, the computer device 1300 may connect to the network 1312 through a network interface unit 1311 connected to the system bus 1305, or the network interface unit 1311 may be used to connect to other types of networks or remote computer systems (not shown).
The memory 1304 further includes one or more programs that are stored in the memory 1304 and configured to be executed by one or more processors. The one or more programs contain instructions for performing the server-side information display method in a virtual environment provided in the above embodiments.
FIG. 19 shows a structural block diagram of a terminal 1400 according to an exemplary embodiment of the present invention. The terminal 1400 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player. The terminal 1400 may also be called user equipment, a portable terminal, or other names.
Generally, the terminal 1400 includes a processor 1401 and a memory 1402.
The processor 1401 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1401 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 1401 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1402 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1402 stores at least one instruction to be executed by the processor 1401 to implement the information display method in a virtual environment provided in this application.
In some embodiments, the terminal 1400 optionally further includes a peripheral device interface 1403 and at least one peripheral device. Specifically, the peripheral device includes at least one of a radio frequency circuit 1404, a touch display screen 1405, a camera 1406, an audio circuit 1407, a positioning component 1408, and a power supply 1409.
The peripheral device interface 1403 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, the memory 1402, and the peripheral device interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1404 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1404 may communicate with other terminals through at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication)-related circuitry, which is not limited in this application.
The touch display screen 1405 is used to display the UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display screen 1405 also has the ability to capture touch signals on or above its surface; such a touch signal may be input to the processor 1401 as a control signal for processing. The touch display screen 1405 is used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 1405, disposed on the front panel of the terminal 1400; in other embodiments, there may be at least two touch display screens 1405, respectively disposed on different surfaces of the terminal 1400 or in a folded design; in still other embodiments, the touch display screen 1405 may be a flexible display disposed on a curved or folded surface of the terminal 1400. The touch display screen 1405 may even be set to a non-rectangular irregular shape, that is, an irregularly shaped screen. The touch display screen 1405 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1406 is used to capture images or video. Optionally, the camera assembly 1406 includes a front camera and a rear camera. Generally, the front camera is used for video calls or selfies, and the rear camera for photo or video capture. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera are combined to realize the background-blur function, and the main camera and the wide-angle camera are combined to realize panoramic and VR (Virtual Reality) shooting. In some embodiments, the camera assembly 1406 may further include a flash, which may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm flash and a cool flash and can be used for light compensation at different color temperatures.
The audio circuit 1407 is used to provide an audio interface between the user and the terminal 1400. The audio circuit 1407 may include a microphone and a speaker. The microphone collects sound waves of the user and the environment and converts them into electrical signals that are input to the processor 1401 for processing, or input to the radio frequency circuit 1404 for voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively disposed at different parts of the terminal 1400. The microphone may also be an array microphone or an omnidirectional microphone. The speaker converts electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves; it may be a conventional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1407 may further include a headphone jack.
The positioning component 1408 is used to locate the current geographic position of the terminal 1400 to implement navigation or LBS (Location Based Service). The positioning component 1408 may be based on the US GPS (Global Positioning System), the Chinese BeiDou system, or the Russian Galileo system.
The power supply 1409 is used to supply power to the components in the terminal 1400. The power supply 1409 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1409 includes a rechargeable battery, it may be a wired rechargeable battery (charged over a wired line) or a wireless rechargeable battery (charged through a wireless coil). The rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 1400 further includes one or more sensors 1410, including but not limited to an acceleration sensor 1411, a gyroscope sensor 1412, a pressure sensor 1413, a fingerprint sensor 1414, an optical sensor 1415, and a proximity sensor 1416.
The acceleration sensor 1411 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1400. For example, it can detect the components of gravitational acceleration on the three axes. The processor 1401 can control the touch display screen 1405 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 can also be used to collect motion data of a game or the user.
The gyroscope sensor 1412 can detect the body direction and rotation angle of the terminal 1400, and can cooperate with the acceleration sensor 1411 to collect the user's 3D actions on the terminal 1400. Based on the data collected by the gyroscope sensor 1412, the processor 1401 can implement functions such as motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1413 may be disposed at the side frame of the terminal 1400 and/or the lower layer of the touch display screen 1405. When disposed at the side frame, it can detect the user's holding signal on the terminal 1400, and left/right-hand recognition or shortcut operations can be performed according to the holding signal. When disposed at the lower layer of the touch display screen 1405, the operable controls on the UI can be controlled according to the user's pressure operation on the touch display screen 1405. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1414 is used to collect the user's fingerprint to identify the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1414 may be disposed on the front, back, or side of the terminal 1400. When a physical button or vendor logo is provided on the terminal 1400, the fingerprint sensor 1414 may be integrated with it.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, the processor 1401 can control the display brightness of the touch display screen 1405 according to the ambient light intensity collected by the optical sensor 1415: when the ambient light intensity is high, the display brightness is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the ambient light intensity collected by the optical sensor 1415.
The proximity sensor 1416, also called a distance sensor, is usually disposed on the front of the terminal 1400 and is used to collect the distance between the user and the front of the terminal 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the terminal 1400 is gradually decreasing, the processor 1401 controls the touch display screen 1405 to switch from the screen-on state to the screen-off state; when the proximity sensor 1416 detects that the distance is gradually increasing, the processor 1401 controls the touch display screen 1405 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in FIG. 19 does not constitute a limitation to the terminal 1400, which may include more or fewer components than illustrated, combine certain components, or employ a different component arrangement.
本申请还提供一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述方法实施例提供的虚拟环境中的信息显示方法。
可选地,本申请还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的虚拟环境中的信息显示方法。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
The foregoing descriptions are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (29)

  1. An information display method in a virtual environment, the method comprising:
    obtaining a first coordinate of a first virtual object in the virtual environment;
    obtaining, according to the first coordinate, a second coordinate of a second virtual object that is located in a first orientation of the first virtual object and is in a preset behavior state;
    calculating a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate;
    obtaining, according to the first distance, a sound effect intensity of the second virtual object in the first orientation; and
    sending a sound effect display instruction to a first terminal corresponding to the first virtual object, the sound effect display instruction instructing the first terminal to display, in the virtual environment, a sound effect indication pattern centered on the first virtual object, the sound effect indication pattern indicating that the second virtual object exists in a location area along the first orientation at a distance negatively correlated with the sound effect intensity.
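Outside the claim language, the server-side core of claim 1 can be sketched as follows. The Euclidean metric, the linear falloff, and the `max_range` value are illustrative assumptions, not limitations taken from the claims, which only require the intensity to fall as the first distance grows:

```python
import math

def first_distance(c1, c2):
    """Euclidean distance between the first and second virtual objects'
    coordinates (tuples of matching dimensionality)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def sound_effect_intensity(distance, max_range=100.0):
    """One way to realize the negative correlation between distance and
    sound effect intensity: a linear falloff clamped at zero."""
    return max(0.0, 1.0 - distance / max_range)

d = first_distance((0, 0), (30, 40))   # 50.0
print(sound_effect_intensity(d))       # 0.5
```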
  2. The method according to claim 1, wherein the sound effect display instruction comprises the sound effect intensity, and the sound effect intensity instructs the first terminal to determine a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter comprising at least one of a size, an area, an outline width, and a number of textures.
  3. The method according to claim 2, wherein the sound effect display instruction comprises the preset behavior state, and the preset behavior state instructs the first terminal to determine a pattern type of the sound effect indication pattern according to the preset behavior state;
    and/or,
    the sound effect display instruction comprises the first orientation, and the first orientation instructs the first terminal to select, from sound effect indication patterns of m different orientations, the sound effect indication pattern corresponding to the first orientation, m being a natural number with m ≥ 1.
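The selection among m pre-made orientation patterns can be sketched by quantizing a heading angle into m equal sectors; representing the first orientation as an angle in degrees, and the equal-sector scheme itself, are assumptions for illustration:

```python
def orientation_pattern_index(angle_deg: float, m: int = 8) -> int:
    """Map a heading angle (degrees, 0 = reference direction) to the
    index of the closest of m evenly spaced indication patterns."""
    sector = 360.0 / m
    # Shift by half a sector so each pattern covers angles nearest to it.
    return int(((angle_deg % 360.0) + sector / 2) // sector) % m
```

With m = 8, headings near 0°, 45°, 90°, … select patterns 0, 1, 2, …, and headings just below 360° wrap back to pattern 0.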
  4. The method according to any one of claims 1 to 3, wherein the obtaining a first coordinate of the first virtual object in the virtual environment comprises:
    obtaining the first coordinate when the first virtual object and the second virtual object do not belong to the same camp.
  5. The method according to any one of claims 1 to 3, wherein the obtaining, according to the first distance, a sound effect intensity of the second virtual object in the first orientation comprises:
    detecting whether another virtual object in the preset behavior state exists in the first orientation, the other virtual object and the first virtual object not belonging to the same camp; and
    obtaining, according to the first distance, the sound effect intensity of the second virtual object in the first orientation when the other virtual object does not exist in the first orientation.
  6. The method according to any one of claims 1 to 3, wherein the obtaining, according to the first distance, a sound effect intensity of the second virtual object in the first orientation comprises:
    determining a target distance range in which the first distance falls among n preset distance ranges, the n preset distance ranges being distance value ranges that do not overlap each other and are adjacent end to end, n being a natural number with n ≥ 2; and
    determining the sound effect intensity of the second virtual object in the first orientation according to the target distance range.
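The range lookup in claim 6 amounts to bucketing the first distance. The concrete boundaries and intensity levels below are assumptions; the claim only requires n non-overlapping, end-to-end ranges each mapped to an intensity:

```python
def intensity_from_ranges(distance, bounds=(10.0, 30.0, 60.0),
                          levels=(3, 2, 1, 0)):
    """Find the target range holding `distance` and return its level.

    `bounds` defines the end-to-end ranges [0,10), [10,30), [30,60),
    [60, inf); `levels` gives the intensity for each range, falling as
    the ranges get farther away.
    """
    for bound, level in zip(bounds, levels):
        if distance < bound:
            return level
    return levels[-1]   # beyond the last boundary: weakest intensity
```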
  7. The method according to any one of claims 1 to 3, further comprising:
    detecting whether a candidate virtual object in the preset behavior state exists in a detection area with the first coordinate as the center and a preset distance as the radius;
    calculating, when at least two candidate virtual objects exist in the first orientation in the detection area, a distance between each candidate virtual object and the first virtual object; and
    determining the candidate virtual object with the smallest distance as the second virtual object.
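The candidate selection in claim 7 can be sketched as a filter-then-minimize step. Representing candidates as (name, coordinate) pairs, and the default radius, are illustrative choices:

```python
import math

def pick_second_object(first_coord, candidates, radius=100.0):
    """Among candidates inside the circular detection area centered on
    `first_coord`, return the name of the closest one, or None if the
    area is empty. `candidates` is a list of (name, coordinate) pairs."""
    dist = lambda c: math.dist(first_coord, c)
    in_area = [(name, c) for name, c in candidates if dist(c) <= radius]
    if not in_area:
        return None
    return min(in_area, key=lambda pair: dist(pair[1]))[0]
```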
  8. The method according to any one of claims 1 to 3, wherein the obtaining a first coordinate of the first virtual object in the virtual environment comprises:
    obtaining the first coordinate of the first virtual object in the virtual environment at preset time intervals.
  9. An information display method in a virtual environment, the method comprising:
    sending a first coordinate of a first virtual object in the virtual environment to a server;
    receiving a sound effect display instruction sent by the server; and
    displaying, according to the sound effect display instruction, a sound effect indication pattern centered on the first virtual object in the virtual environment, the sound effect indication pattern indicating that a second virtual object in a preset behavior state exists in a location area along a first orientation of the first virtual object at a distance negatively correlated with a sound effect intensity;
    wherein the sound effect display instruction is an instruction sent by the server after obtaining the sound effect intensity, the sound effect intensity is obtained by the server according to a first distance between the first virtual object and the second virtual object, and the first distance is calculated by the server according to the first coordinate and a second coordinate after the server obtains, according to the first coordinate, the second coordinate of the second virtual object located in the first orientation.
  10. The method according to claim 9, wherein the sound effect display instruction comprises the sound effect intensity;
    before the displaying, according to the sound effect display instruction, a sound effect indication pattern centered on the first virtual object in the virtual environment, the method further comprises:
    determining a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter comprising at least one of a size, an area, an outline width, and a number of textures.
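One possible client-side mapping from intensity to pattern parameters, as in claim 10. The specific formulas, pixel values, and the choice that louder sounds draw a smaller, heavier ring are all assumptions; the claim only requires the parameters to be derived from the intensity:

```python
def pattern_parameters(intensity: float) -> dict:
    """Derive drawing parameters for the sound effect indication pattern
    from a sound effect intensity in [0.0, 1.0]."""
    intensity = max(0.0, min(1.0, intensity))
    radius = 40 + (1.0 - intensity) * 160   # px: louder means a nearer ring
    return {
        "size": round(radius),
        "outline_width": 1 + round(intensity * 4),
        "texture_count": 1 + round(intensity * 2),
    }
```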
  11. The method according to claim 10, wherein the sound effect display instruction further comprises the preset behavior state and/or the first orientation, and the method further comprises:
    determining a pattern type of the sound effect indication pattern according to the preset behavior state;
    and/or,
    selecting, from sound effect indication patterns of m different orientations, the sound effect indication pattern corresponding to the first orientation, m being a natural number with m ≥ 1.
  12. The method according to any one of claims 9 to 11, wherein the sending a first coordinate of a first virtual object in the virtual environment to a server comprises:
    sending the first coordinate to the server at preset time intervals.
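The periodic reporting in claim 12 (and its server-side mirror in claim 8) is a simple timed loop. `get_coord` and `send` below are hypothetical stand-ins for the engine query and the network call; the interval and tick count are likewise illustrative:

```python
import time

def report_coordinates(get_coord, send, interval_s=0.5, ticks=3):
    """Send the first virtual object's coordinate to the server once per
    preset interval, for a fixed number of ticks (a real client would
    loop until the session ends)."""
    for _ in range(ticks):
        send(get_coord())
        time.sleep(interval_s)

sent = []
report_coordinates(lambda: (1, 2), sent.append, interval_s=0.0, ticks=3)
print(len(sent))   # 3
```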
  13. An information display method in a virtual environment, the method comprising:
    displaying a first display picture in which the virtual environment is observed from a first perspective, an object model of a first virtual object being displayed in the first display picture; and
    displaying, when a second virtual object in a preset behavior state exists in a first orientation of the first virtual object in the virtual environment, a sound effect indication pattern centered on the first virtual object, the sound effect indication pattern pointing in the direction of the location of the second virtual object.
  14. The method according to claim 13, wherein the displaying a first display picture in which the virtual environment is observed from a first perspective comprises:
    displaying the first display picture observed from a top-down perspective.
  15. The method according to claim 14, wherein a pattern parameter of the sound effect indication pattern is determined according to a first distance between the first virtual object and the second virtual object, the pattern parameter comprising at least one of a size, an area, an outline width, and a number of textures.
  16. An information display apparatus in a virtual environment, the apparatus comprising:
    an obtaining module, configured to obtain a first coordinate of a first virtual object in the virtual environment, and obtain, according to the first coordinate, a second coordinate of a second virtual object that is located in a first orientation of the first virtual object and is in a preset behavior state;
    a processing module, configured to calculate a first distance between the first virtual object and the second virtual object according to the first coordinate and the second coordinate, and obtain, according to the first distance, a sound effect intensity of the second virtual object in the first orientation; and
    a sending module, configured to send a sound effect display instruction to a first terminal corresponding to the first virtual object, the sound effect display instruction instructing the first terminal to display, in the virtual environment, a sound effect indication pattern centered on the first virtual object, the sound effect indication pattern indicating that the second virtual object exists in a location area along the first orientation at a distance negatively correlated with the sound effect intensity.
  17. The apparatus according to claim 16, wherein the sound effect display instruction comprises the sound effect intensity, and the sound effect intensity instructs the first terminal to determine a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter comprising at least one of a size, an area, an outline width, and a number of textures.
  18. The apparatus according to claim 17, wherein the sound effect display instruction comprises the preset behavior state, and the preset behavior state instructs the first terminal to determine a pattern type of the sound effect indication pattern according to the preset behavior state;
    and/or,
    the sound effect display instruction comprises the first orientation, and the first orientation instructs the first terminal to select, from sound effect indication patterns of m different orientations, the sound effect indication pattern corresponding to the first orientation, m being a natural number with m ≥ 1.
  19. The apparatus according to any one of claims 16 to 18, wherein the obtaining module is further configured to obtain the first coordinate when the first virtual object and the second virtual object do not belong to the same camp.
  20. The apparatus according to any one of claims 16 to 18, wherein the processing module is further configured to: detect whether another virtual object in the preset behavior state exists in the first orientation, the other virtual object and the first virtual object not belonging to the same camp; and obtain, according to the first distance, the sound effect intensity of the second virtual object in the first orientation when the other virtual object does not exist in the first orientation.
  21. The apparatus according to any one of claims 16 to 18, wherein the processing module is further configured to: determine a target distance range in which the first distance falls among n preset distance ranges, the n preset distance ranges being distance value ranges that do not overlap each other and are adjacent end to end, n being a natural number with n ≥ 2; and determine the sound effect intensity of the second virtual object in the first orientation according to the target distance range.
  22. The apparatus according to any one of claims 16 to 18, wherein the processing module is further configured to: detect whether a candidate virtual object in the preset behavior state exists in a detection area with the first coordinate as the center and a preset distance as the radius; calculate, when at least two candidate virtual objects exist in the first orientation in the detection area, a distance between each candidate virtual object and the first virtual object; and determine the candidate virtual object with the smallest distance as the second virtual object.
  23. The apparatus according to any one of claims 16 to 18, wherein the obtaining module is further configured to obtain the first coordinate of the first virtual object in the virtual environment at preset time intervals.
  24. An information display apparatus in a virtual environment, the apparatus comprising:
    a sending module, configured to send a first coordinate of a first virtual object in the virtual environment to a server;
    a receiving module, configured to receive a sound effect display instruction sent by the server; and
    a display module, configured to display, according to the sound effect display instruction, a sound effect indication pattern centered on the first virtual object in the virtual environment, the sound effect indication pattern indicating that a second virtual object in a preset behavior state exists in a location area along a first orientation of the first virtual object at a distance negatively correlated with a sound effect intensity;
    wherein the sound effect display instruction is an instruction sent by the server after obtaining the sound effect intensity, the sound effect intensity is obtained by the server according to a first distance between the first virtual object and the second virtual object, and the first distance is calculated by the server according to the first coordinate and a second coordinate after the server obtains, according to the first coordinate, the second coordinate of the second virtual object located in the first orientation.
  25. The apparatus according to claim 24, wherein the sound effect display instruction comprises the sound effect intensity;
    the display module is further configured to determine a pattern parameter of the sound effect indication pattern according to the sound effect intensity, the pattern parameter comprising at least one of a size, an area, an outline width, and a number of textures.
  26. The apparatus according to claim 25, wherein the sound effect display instruction comprises the preset behavior state and/or the first orientation;
    the display module is further configured to determine a pattern type of the sound effect indication pattern according to the preset behavior state, and/or select, from sound effect indication patterns of m different orientations, the sound effect indication pattern corresponding to the first orientation, m being a natural number with m ≥ 1.
  27. The apparatus according to any one of claims 24 to 26, wherein the sending module is further configured to send the first coordinate to the server at preset time intervals.
  28. An electronic device, comprising a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the information display method in a virtual environment according to any one of claims 1 to 15.
  29. A computer-readable storage medium, storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the information display method in a virtual environment according to any one of claims 1 to 15.
PCT/CN2019/080125 2018-04-27 2019-03-28 Information display method, apparatus, device and storage medium in a virtual environment WO2019205881A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/904,884 US11458395B2 (en) 2018-04-27 2020-06-18 Method for displaying information in a virtual environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810391238.X 2018-04-27
CN201810391238.XA 2018-04-27 Information display method, apparatus, device and storage medium in a virtual environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/904,884 Continuation US11458395B2 (en) 2018-04-27 2020-06-18 Method for displaying information in a virtual environment

Publications (1)

Publication Number Publication Date
WO2019205881A1 (zh)

Family

ID=63610616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/080125 WO2019205881A1 (zh) 2018-04-27 2019-03-28 虚拟环境中的信息显示方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US11458395B2 (zh)
CN (1) CN108579084A (zh)
WO (1) WO2019205881A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108579084A (zh) * 2018-04-27 2018-09-28 腾讯科技(深圳)有限公司 虚拟环境中的信息显示方法、装置、设备及存储介质
CN108939535B (zh) * 2018-06-25 2022-02-15 网易(杭州)网络有限公司 虚拟场景的音效控制方法及装置、存储介质、电子设备
CN109529335B (zh) * 2018-11-06 2022-05-20 Oppo广东移动通信有限公司 游戏角色音效处理方法、装置、移动终端及存储介质
CN109876438B (zh) * 2019-02-20 2021-06-18 腾讯科技(深圳)有限公司 用户界面显示方法、装置、设备及存储介质
CN112565165B (zh) * 2019-09-26 2022-03-29 北京外号信息技术有限公司 基于光通信装置的交互方法和系统
CN110371051B (zh) * 2019-07-22 2021-06-04 广州小鹏汽车科技有限公司 一种车载娱乐的提示音播放方法和装置
CN110465088A (zh) * 2019-08-07 2019-11-19 上海欧皇网络科技有限公司 一种游戏角色的控制方法及装置
CN110538456B (zh) * 2019-09-09 2023-08-08 珠海金山数字网络科技有限公司 一种虚拟环境中的音源设置方法、装置、设备及存储介质
CN111228802B (zh) * 2020-01-15 2022-04-26 腾讯科技(深圳)有限公司 信息提示方法和装置、存储介质及电子装置
CN112704876B (zh) * 2020-12-30 2022-10-04 腾讯科技(深圳)有限公司 虚拟对象互动模式的选择方法、装置、设备及存储介质
CN112675544B (zh) * 2020-12-30 2023-03-17 腾讯科技(深圳)有限公司 虚拟道具的获取方法、装置、设备及介质
CN118179018A (zh) * 2021-04-28 2024-06-14 网易(杭州)网络有限公司 信息处理方法、装置、存储介质及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012029096A (ja) * 2010-07-23 2012-02-09 Nec Casio Mobile Communications Ltd Audio output device
US20130130795A1 (en) * 2011-07-08 2013-05-23 Kazuha HAYASHI Game machine, a storage medium storing a computer program used thereof, and control method
US20160150314A1 (en) * 2014-11-26 2016-05-26 Sony Computer Entertainment Inc. Information processing device, information processing system, control method, and program
CN107469354A (zh) * 2017-08-30 2017-12-15 NetEase (Hangzhou) Network Co., Ltd. Visual method and apparatus for compensating sound information, storage medium, and electronic device
CN107890673A (zh) * 2017-09-30 2018-04-10 NetEase (Hangzhou) Network Co., Ltd. Visual display method and apparatus for compensating sound information, storage medium, and device
CN108579084A (zh) * 2018-04-27 2018-09-28 Tencent Technology (Shenzhen) Co., Ltd. Information display method, apparatus, device and storage medium in a virtual environment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090318773A1 (en) * 2008-06-24 2009-12-24 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Involuntary-response-dependent consequences
JP6065370B2 (ja) * 2012-02-03 2017-01-25 Sony Corporation Information processing device, information processing method, and program
JP5966510B2 (ja) * 2012-03-29 2016-08-10 Sony Corporation Information processing system
KR20160026317A (ko) * 2014-08-29 2016-03-09 Samsung Electronics Co., Ltd. Voice recording method and apparatus
KR102102761B1 (ko) * 2017-01-17 2020-04-21 LG Electronics Inc. User interface device for a vehicle, and vehicle
US11214384B2 (en) * 2017-03-06 2022-01-04 Textron Innovations, Inc. Hexagonal floor and ceiling system for a vehicle
IT201800003194A1 (it) * 2018-03-01 2019-09-01 Image Studio Consulting S R L Carousel interactions

Also Published As

Publication number Publication date
CN108579084A (zh) 2018-09-28
US20200316472A1 (en) 2020-10-08
US11458395B2 (en) 2022-10-04

Similar Documents

Publication Publication Date Title
WO2019205881A1 (zh) Information display method, apparatus, device and storage medium in a virtual environment
CN108619721B (zh) Distance information display method and apparatus in a virtual scene, and computer device
US11703993B2 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
CN108710525B (zh) Map display method, apparatus, device, and storage medium in a virtual scene
CN109529319B (zh) Display method, device, and storage medium for interface controls
CN108245893B (zh) Method, apparatus, and medium for determining the posture of a virtual object in a three-dimensional virtual environment
CN111414080B (zh) Method, apparatus, device, and storage medium for displaying the position of a virtual object
CN110045827B (zh) Method and apparatus for observing virtual items in a virtual environment, and readable storage medium
CN108671543A (zh) Marker element display method in a virtual scene, computer device, and storage medium
CN111589142A (zh) Virtual object control method, apparatus, device, and medium
CN112245912B (zh) Sound prompt method, apparatus, device, and storage medium in a virtual scene
CN110585704B (zh) Object prompt method, apparatus, device, and storage medium in a virtual scene
CN111273780B (zh) Animation playing method, apparatus, device, and storage medium based on a virtual environment
CN110738738B (zh) Virtual object marking method in a three-dimensional virtual scene, device, and storage medium
CN113041620B (zh) Method, apparatus, device, and storage medium for displaying position markers
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
CN109806583B (zh) User interface display method, apparatus, device, and system
WO2022237076A1 (zh) Virtual object control method and apparatus, device, and computer-readable storage medium
CN111672115B (zh) Virtual object control method, apparatus, computer device, and storage medium
CN111035929B (zh) Elimination information feedback method, apparatus, device, and medium based on a virtual environment
CN112604302A (zh) Interaction method, apparatus, device, and storage medium for virtual objects in a virtual environment
US11865449B2 (en) Virtual object control method, apparatus, device, and computer-readable storage medium
CN113633976A (zh) Operation control method, apparatus, device, and computer-readable storage medium
CN113318443A (zh) Reconnaissance method, apparatus, device, and medium based on a virtual environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19794079

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19794079

Country of ref document: EP

Kind code of ref document: A1