WO2023226569A1 - Message processing method and apparatus in a virtual scene, electronic device, computer-readable storage medium, and computer program product - Google Patents

Message processing method and apparatus in a virtual scene, electronic device, computer-readable storage medium, and computer program product

Info

Publication number
WO2023226569A1
WO2023226569A1 (PCT application PCT/CN2023/083259)
Authority
WO
WIPO (PCT)
Prior art keywords
message
virtual object
virtual
map
control
Prior art date
Application number
PCT/CN2023/083259
Other languages
English (en)
French (fr)
Other versions
WO2023226569A9 (zh)
Inventor
叶成豪
王子奕
崔维健
韩帅
吴胜宇
何晶晶
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023226569A1 publication Critical patent/WO2023226569A1/zh
Publication of WO2023226569A9 publication Critical patent/WO2023226569A9/zh

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/53: ... involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533: ... for prompting the player, e.g. by displaying a game menu
    • A63F 13/537: ... using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5378: ... for displaying an additional top view, e.g. radar screens or maps
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/85: Providing additional services to players
    • A63F 13/87: Communicating with other players during game play, e.g. by e-mail or chat

Definitions

  • the present application relates to computer technology, and in particular to a message processing method, device, electronic equipment, computer-readable storage medium and computer program product in a virtual scene.
  • Display technology based on graphics processing hardware expands the channels for perceiving the environment and obtaining information, especially the display technology of virtual scenes, which can realize diversified interactions between virtual objects controlled by users or artificial intelligence according to actual application requirements. It has various typical application scenarios, such as in virtual scenes such as games, and can simulate the real battle process between virtual objects.
  • In the related art, the ways for users to communicate with other users include voice communication, in-match quick marking, quick messages, etc. These messages are usually sent to everyone in the same camp or team, so some teammates are disturbed by messages that have nothing to do with them. Although efficient communication with specific teammates can be achieved through voice, some users' terminal devices may not be equipped with the hardware or software required for voice communication. The related art has not yet proposed a solution for efficiently sending messages to a specific user.
  • Embodiments of this application provide a message processing method and device in a virtual scene, an electronic device, a computer-readable storage medium and a computer program product, which can efficiently send point-to-point messages in a virtual scene, thereby eliminating the interference that messages cause to irrelevant users.
  • An embodiment of this application provides a message processing method in a virtual scene, including:
  • displaying, in a map interface corresponding to a first virtual object, a map of at least a partial area of the virtual scene;
  • in response to the appearance of at least one second virtual object in the partial area, displaying in the map a location mark control used to represent a first location where the second virtual object is currently located, wherein the second virtual object is any virtual object belonging to the same camp as the first virtual object;
  • An embodiment of the present application provides a message processing device in a virtual scene, including:
  • a display module, configured to display, in a map interface corresponding to a first virtual object, a map of at least a partial area of the virtual scene;
  • the display module is further configured to, in response to the appearance of at least one second virtual object in the partial area, display in the map a location mark control used to represent a first location where the second virtual object is currently located, wherein the second virtual object is any virtual object belonging to the same camp as the first virtual object;
  • a message sending module, configured to move the location mark control from the first location to a second location in response to a movement operation on the location mark control, and to send a message to the second virtual object, wherein the message is used to instruct the second virtual object to arrive at the second location and execute an instruction.
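The interaction between the display module and the message sending module can be illustrated with a minimal Python sketch. This is illustrative only; the class, field, and callback names are hypothetical and are not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class PointToPointMessage:
    """A point-to-point message carrying the second position and the instruction."""
    sender_id: int          # number of the first virtual object in the team
    receiver_id: int        # number of the targeted second virtual object
    second_position: tuple  # (x, y) map coordinates the teammate should reach
    instruction: str        # e.g. "attack", "defend", "retreat"

class MessageSendingModule:
    """Sketch of the message sending module: moving the position mark
    control to a second position sends a message to that teammate only."""
    def __init__(self, network_send):
        # network_send(receiver_id, message) delivers to a single terminal
        self.network_send = network_send

    def on_marker_moved(self, sender_id, receiver_id, second_position, instruction):
        msg = PointToPointMessage(sender_id, receiver_id, second_position, instruction)
        self.network_send(receiver_id, msg)
        return msg
```

Because the message addresses a single receiver, teammates other than the targeted second virtual object never receive it.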
  • An embodiment of the present application provides an electronic device, which includes:
  • a memory, used to store executable instructions;
  • a processor, configured to implement the message processing method in a virtual scene provided by the embodiments of this application when executing the executable instructions stored in the memory.
  • Embodiments of this application provide a computer-readable storage medium that stores executable instructions; when the executable instructions are executed by a processor, the message processing method in a virtual scene provided by the embodiments of this application is implemented.
  • An embodiment of this application provides a computer program product, which includes a computer program or instructions; when the computer program or instructions are executed by a processor, the message processing method in a virtual scene provided by the embodiments of this application is implemented.
  • Reusing the location mark control in the map interface of the virtual scene to send messages, compared with adding new controls to the human-computer interaction interface for message sending, simplifies the interaction logic of the virtual scene and improves operating efficiency; point-to-point messages can be sent without audio capture devices (such as microphones), saving the computing resources required by the virtual scene.
  • Figure 1A is a schematic diagram of the application mode of the message processing method in the virtual scene provided by the embodiment of the present application;
  • Figure 1B is a schematic diagram of the application mode of the message processing method in the virtual scene provided by the embodiment of the present application;
  • Figure 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application.
  • FIGS 3A to 3F are schematic flow charts of the message processing method in the virtual scene provided by the embodiment of the present application.
  • Figure 4A is a schematic diagram of a map interface displayed in a virtual scene interface provided by an embodiment of the present application.
  • Figure 4B is a schematic diagram of a map interface independent of the virtual scene interface provided by an embodiment of the present application.
  • FIGS 5A to 5F are schematic map diagrams of the message processing method in the virtual scene provided by the embodiment of the present application.
  • Figures 6A to 6G are schematic map diagrams of the message processing method in the virtual scene provided by the embodiment of the present application.
  • Figure 7A is a schematic diagram of the arrangement of command controls provided by the embodiment of the present application.
  • Figure 7B is a schematic diagram of the virtual scene interface corresponding to the second virtual object provided by the embodiment of the present application.
  • Figure 8 is an optional flow diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • The terms "first", "second" and "third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understandable that, where appropriate, the specific order or sequence may be interchanged so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
  • The embodiments of this application involve user information, user feedback data and other related data. When the embodiments of this application are applied, user permission or consent needs to be obtained, and the collection, use and processing of relevant data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
  • Virtual scene: a scene output by a device that is different from the real world. A visual perception of the virtual scene can be formed with the naked eye or with the assistance of devices, such as two-dimensional images output through a display screen, or three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality and augmented reality; in addition, various real-world-like perceptions such as auditory, tactile, olfactory and motion perception can also be formed through various possible hardware.
  • In response to: used to represent the condition or state on which a performed operation depends. When the dependent condition or state is met, the one or more performed operations may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of multiple performed operations.
  • Virtual object: an object that interacts in the virtual scene, controlled by a user or a robot program (for example, a robot program based on artificial intelligence), which can stand still, move and perform various behaviors in the virtual scene, such as the various characters in a game.
  • Map: used to display the terrain of at least part of the virtual scene and various elements on its surface (for example, buildings, virtual vehicles, virtual objects).
  • Point-to-point message: a message sent from one terminal device to another terminal device in a point-to-point manner.
  • Embodiments of this application provide a message processing method in a virtual scene, a message processing device in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can efficiently send point-to-point messages in a virtual scene, thereby eliminating the interference caused by messages to irrelevant users.
  • The electronic device provided by the embodiments of this application can be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device or a vehicle-mounted terminal), or other various types of user terminals, and can also be implemented as a server.
  • FIG. 1A is a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application. It is suitable for application modes in which the computation of virtual-scene-related data is completed entirely by the computing power of the graphics processing hardware of the terminal device 400, such as a stand-alone or offline game, where the output of the virtual scene is completed through various types of terminal devices 400 such as smartphones, tablets, virtual reality devices and augmented reality devices.
  • Examples of graphics processing hardware include central processing units (CPU, Central Processing Unit) and graphics processing units (GPU, Graphics Processing Unit).
  • The terminal device 400 runs a client 401 (for example, a stand-alone game application). During the running of the client 401, a virtual scene including role-playing is output.
  • The virtual scene may be an environment for game characters to interact in, for example, plains, streets, valleys, etc.
  • The first virtual object may be a game character controlled by the user; that is, the first virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (such as a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene.
  • the second virtual object is a virtual object of the same camp as the first virtual object.
  • the map interface is displayed in part of the virtual scene interface in the form of a floating layer, or the map interface of the virtual scene is displayed on an interface independent of the virtual scene interface.
  • the first virtual object may be a user-controlled virtual object
  • The client 401 displays the map 102 of at least a partial area of the virtual scene 101 in the map interface corresponding to the first virtual object; in response to the appearance in the partial area of at least one second virtual object (in the same camp as the first virtual object), a location mark control used to represent the first location where the second virtual object is currently located is displayed on the map.
  • In response to a movement operation on the location mark control, the location mark control is moved from the first location to the second location, and a message is sent to the second virtual object, where the message carries the second location and an instruction, and is used to instruct the second virtual object to arrive at the second location and execute the instruction.
  • FIG. 1B is a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application. It is applied to a terminal device 400 and a server 200, and is suitable for application modes in which the virtual scene computation is completed by relying on the computing capability of the server 200 and the virtual scene is output on the terminal device 400.
  • The server 200 computes the display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on graphics computing hardware to complete the loading, parsing and rendering of the display data, and relies on graphics output hardware to output the virtual scene and form a visual perception. For example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames achieving a three-dimensional display effect can be projected onto the lenses of augmented reality/virtual reality glasses. As for other forms of perception of the virtual scene, it can be understood that they can be output through the corresponding hardware of the terminal device 400, for example, using a speaker to form auditory perception, using a vibrator to form tactile perception, and so on.
  • the terminal device 400 runs a client 401 (for example, a network version of a game application), and interacts with other users by connecting to the server 200 (for example, a game server).
  • the terminal device 400 outputs the virtual scene 101 of the client 401.
  • A first virtual object and a launching prop (for example, a shooting prop or a throwing prop) held by the first virtual object through a holding part (for example, a hand) are displayed in the virtual scene, where the first virtual object may be a game character controlled by the user; that is, the first virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (such as a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene; the user can also control the first virtual object to stay stationary, jump, and perform shooting operations.
  • the second virtual object is a virtual object of the same camp as the first virtual object.
  • the map interface is displayed in some areas of the virtual scene interface in the form of a floating layer, or the map interface of the virtual scene is displayed on an interface independent of the virtual scene interface.
  • the map 102 in FIG. 1A is displayed in the virtual scene 101 in the form of a floating layer.
  • the first virtual object may be a user-controlled virtual object
  • The client 401 displays the map 102 of at least a partial area of the virtual scene 101 in the map interface corresponding to the first virtual object; in response to the appearance in the partial area of at least one second virtual object (in the same camp as the first virtual object), a location mark control used to represent the first location where the second virtual object is currently located is displayed on the map.
  • In response to a movement operation on the location mark control, the location mark control is moved from the first location to the second location, and a message is sent to the second virtual object, where the message carries the second location and an instruction, and is used to instruct the second virtual object to arrive at the second location and execute the instruction.
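In the server-assisted mode, delivery of such a message can be sketched as a server-side relay. This Python sketch is illustrative only (the class and method names are hypothetical): the server forwards the message only to the targeted teammate's terminal rather than broadcasting it to the whole camp, which is what keeps irrelevant teammates undisturbed.

```python
class MessageRelayServer:
    """Sketch of point-to-point relay on the server: a message addressed
    to one virtual object is queued only for that object's terminal."""
    def __init__(self):
        self.terminals = {}  # virtual object number -> outbound message queue

    def register(self, object_id):
        # Called when a terminal device connects for a given virtual object.
        self.terminals[object_id] = []

    def relay(self, receiver_id, message):
        # Point-to-point delivery: only the receiver's queue is touched,
        # so no other camp member receives the message.
        self.terminals[receiver_id].append(message)

server = MessageRelayServer()
for oid in (1, 2, 3):
    server.register(oid)
server.relay(3, {"second_position": (40, 25), "instruction": "defend"})
```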
  • the terminal device 400 runs an application program that supports virtual scenes.
  • The application can be any one of a first-person shooting game (FPS, First-Person Shooting game), a third-person shooting game, a virtual reality application, a 3D mapping program, or a multiplayer survival game.
  • The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities, which include but are not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building virtual structures.
  • the virtual object may be a virtual character, such as a simulated character or an animation character.
  • Cloud technology refers to the unification of a series of resources such as hardware, software and networks within a wide area network or a local area network to realize data computation and storage.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology and application technology based on the cloud computing business model. It can form a resource pool to be used on demand, which is flexible and convenient. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
  • Cloud gaming also known as gaming on demand, is an online gaming technology based on cloud computing technology. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games.
  • the game is not run on the player's game terminal, but runs on the cloud server, and the cloud server renders the game scene into a video and audio stream, which is transmitted to the player's game terminal through the network.
  • Player game terminals do not need to have powerful graphics computing and data processing capabilities. They only need to have basic streaming media playback capabilities and the ability to obtain player input instructions and send them to the cloud server.
  • The server 200 in FIG. 1B can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and big data and artificial intelligence platforms.
  • the terminal device 400 and the server 200 can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of this application.
  • the terminal device 400 shown in Figure 2 includes: at least one processor 410, a memory 450, at least one network interface 420 and a user interface 430.
  • the various components in the terminal device 400 are coupled together via a bus system 440 .
  • the bus system 440 is used to implement connection communication between these components.
  • The bus system 440 also includes a power bus, a control bus and a status signal bus; for the sake of clarity, the various buses are all labeled as bus system 440 in FIG. 2.
  • The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, etc., where the general-purpose processor may be a microprocessor or any conventional processor, etc.
  • User interface 430 includes one or more output devices 431 that enable the presentation of media content, including one or more speakers and/or one or more visual displays.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Memory 450 includes volatile memory or non-volatile memory, and may include both volatile and non-volatile memory.
  • the memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • the operating system 451 includes system programs used to process various basic system services and perform hardware-related tasks, such as the framework layer, core library layer, driver layer, etc., which are used to implement various basic services and process hardware-based tasks;
  • Network communication module 452 for reaching other computers via one or more (wired or wireless) network interfaces 420
  • Exemplary network interfaces 420 include: Bluetooth, Wi-Fi, Universal Serial Bus (USB, Universal Serial Bus), etc.;
  • Presentation module 453, for enabling the presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with the user interface 430;
  • An input processing module 454 for detecting one or more user inputs or interactions from one or more input devices 432 and translating the detected inputs or interactions.
  • The message processing device in a virtual scene provided by the embodiments of this application can be implemented in software. FIG. 2 shows the message processing device 455 in a virtual scene stored in the memory 450, which can be software in the form of a program, a plug-in, etc., and includes the following software modules: a display module 4551 and a message sending module 4552. These modules are logical, so they can be combined or further split arbitrarily according to the functions implemented.
  • Figure 3A is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in Figure 3A.
  • In step 301A, a map of at least a partial area of the virtual scene is displayed in the map interface corresponding to the first virtual object.
  • the map is a preview of the entire area of the virtual scene, or the map is a preview of a partial area of the virtual scene, where the partial area is an area radiating outward with the first virtual object as the center.
  • The description below takes the case where the first virtual object is the virtual object corresponding to the user as an example.
  • the second virtual object is another virtual object in the same camp as the first virtual object.
  • The second virtual object can be controlled by another user or by artificial intelligence; the embodiments of this application are explained by taking a second virtual object controlled by another user as an example.
  • the first virtual object is the virtual object that sends the message
  • the second virtual object is the virtual object that receives the message.
  • Figure 5A is a map schematic diagram of the message processing method in the virtual scene provided by the embodiment of the present application
  • the map 501A is a browsing map of the entire area of the virtual scene
  • a map zoom control 503A is provided on the outer edge of the map 501A.
  • The map zoom control 503A is used to adjust the display ratio between the map and the virtual scene: moving the circular icon of the map zoom control 503A toward the plus sign 504A enlarges the map, and conversely, moving it toward the minus sign 505A shrinks the map.
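The zoom behavior can be sketched as a mapping from the icon's position on the slider to a display ratio. The linear mapping and the scale bounds below are assumptions for illustration; the text only states that moving toward the plus sign enlarges the map and toward the minus sign shrinks it:

```python
def zoom_scale(icon_t, min_scale=0.5, max_scale=4.0):
    """Map the position of the zoom control's circular icon (0.0 at the
    minus sign 505A, 1.0 at the plus sign 504A) to a map display ratio."""
    icon_t = max(0.0, min(1.0, icon_t))  # clamp the icon to the slider track
    return min_scale + icon_t * (max_scale - min_scale)
```

Moving the icon toward the plus sign increases `icon_t` and therefore the returned ratio, enlarging the map.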
  • the position mark control X2 is a position mark control of the first virtual object.
  • the position mark control X2 displays a line segment that represents the direction of vision of the first virtual object in the virtual scene.
  • The number 2 represents the number of the first virtual object in the team or camp; numbering is used to distinguish the position mark controls of different virtual objects in the same camp.
  • The position mark control X3 is the position mark control of the second virtual object numbered 3.
  • the map interface can also be displayed in any of the following ways:
  • the map interface may be continuously displayed, or the map interface may be displayed in response to a calling operation on the map interface; and the map interface may be hidden in response to a withdrawal operation on the map interface.
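The two display modes above (continuous display, or call-out and withdrawal on demand) can be sketched as follows; the class and method names are illustrative only:

```python
class MapInterface:
    """Sketch of the map interface's two display modes: continuously
    displayed, or shown/hidden in response to call-out and withdrawal
    operations."""
    def __init__(self, always_on=False):
        self.always_on = always_on
        self.visible = always_on   # a continuously displayed map starts visible

    def call_out(self):
        # e.g. the user clicks the shortcut key corresponding to the map.
        self.visible = True

    def withdraw(self):
        if not self.always_on:     # a continuously displayed map is never hidden
            self.visible = False
```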
  • the map 102 is continuously displayed in the upper right corner of the virtual scene 101 in a floating layer.
  • Figure 4A is a schematic diagram of a map interface displayed in a virtual scene interface provided by an embodiment of the present application.
  • In response to a call-out operation on the map interface 402A (for example, clicking a shortcut key corresponding to the map interface), the map interface 402A is displayed in the virtual scene interface 401A as a floating layer.
  • Figure 4B is a schematic diagram of a map interface that is independent of the virtual scene interface provided by an embodiment of the present application.
  • the map interface 402B and the virtual scene interface 401B respectively correspond to different tab pages, and the map interface 402B is displayed independently of the virtual scene interface 401B.
  • the tab page is displayed independently of the virtual scene.
  • the map interface 402B can also be independently displayed in other ways.
  • In step 302A, in response to the appearance of at least one second virtual object in the partial area, a location mark control used to represent the first location where the second virtual object is currently located is displayed on the map.
  • the second virtual object is any virtual object that belongs to the same camp as the first virtual object.
  • The position mark control of a virtual object moves synchronously in the map as the virtual object moves in the virtual scene; in addition to displaying the position mark controls of virtual objects, the map can also display marked points and the position mark controls of virtual vehicles. Marked points are points at fixed locations on the map.
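Keeping the position mark control in step with its virtual object amounts to projecting a scene position into map coordinates each frame. This Python sketch assumes an axis-aligned rectangular scene; the text does not specify a particular projection:

```python
def world_to_map(world_pos, world_size, map_size):
    """Convert a virtual-scene position to map coordinates so that the
    position mark control moves synchronously with its virtual object."""
    wx, wy = world_pos    # position of the virtual object in the scene
    ww, wh = world_size   # extent of the scene area shown on the map
    mw, mh = map_size     # pixel size of the map
    return (wx / ww * mw, wy / wh * mh)
```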
  • marking points can also be generated in the map in the following manner: in response to a triggering operation on the first marking control in the map, the display enters the place marking mode, in response to a click operation on the map, on the map The click position displays the first custom marker point; in response to the trigger operation for the second marker control in the map, the second custom marker point is displayed at the first location where the first virtual object is currently located on the map.
  • the first custom marker point is displayed synchronously in the map interface corresponding to the second virtual object.
  • the second custom marker point is also displayed synchronously in the map interface corresponding to the second virtual object.
  • entry into the map marking mode can be indicated in any of the following ways: a text prompt of entering the map marking mode, switching the background color of the map to another color, or highlighting the grid lines in the map that serve as a location reference.
  • the custom marker points are synchronously displayed in the map interface of the second virtual object, enabling teammates to share location information corresponding to the marker points, and facilitating teamwork among teammates in the same camp based on different location markers.
  • the marker point can be used as a reference point for different locations on the map, and the user can drag the location marker control to the desired location based on the reference point to improve the accuracy of the second location carried in the message.
  • Figure 5D is a schematic map diagram of a message processing method in a virtual scene provided by an embodiment of the present application; in the map 501A, the first mark control 501D and the second mark control 502D are respectively displayed on the inner edge of the map.
  • the location marking mode can be entered by triggering the first marking control 501D (refer to Figures 5A and 5D, the map 501A in Figure 5D is displayed in other colors compared to the map 501A in Figure 5A).
  • in the location marking mode, in response to a click on any location on the map, the first custom marker point is displayed at the clicked position on the map, for example, the first custom marker point D1.
  • when the second marking control 502D is triggered, the second custom marker point D2 is displayed at the position of the position mark control X2 of the first virtual object.
  • the second virtual object is a teammate of the first virtual object.
  • the custom marker points are displayed synchronously in the map interface corresponding to the second virtual object; that is, the points a user marks on his own map are displayed simultaneously on the maps of teammates.
  • Each user on the same team can view the custom marker point on their corresponding map, realizing the sharing of marker points and improving interaction efficiency.
  • the position mark control corresponding to a virtual vehicle can also be displayed in the following manner: in response to the appearance of at least one virtual vehicle (for example, a car, a motorcycle, an aircraft, etc.) in the partial area, a position mark control representing the second position where the virtual vehicle is located is displayed on the map, where the mark type of the position mark control of the virtual vehicle is the virtual vehicle position mark.
  • a virtual vehicle is a prop used to carry virtual objects in a virtual scene.
  • a picture of the virtual vehicle carrying a virtual object moving is displayed in the virtual scene.
  • the position mark control of the virtual vehicle in the map moves as the position of the virtual vehicle changes in the virtual scene.
  • FIG. 6A is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control Z1 of the virtual vehicle is displayed at an adjacent position to the position mark control X2 of the first virtual object. If the first virtual object drives the virtual vehicle corresponding to the position mark control Z1, the position mark control Z1 and the position mark control X2 are superimposed and displayed, and the position mark control X2 and the position mark control Z1 move synchronously.
  • step 303A in response to the move operation for the position mark control, the position mark control is moved from the first position to the second position.
  • Figure 3B is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application. Step 303A can be implemented through steps 3031B to 3033B, which will be described in detail below.
  • step 3031B in response to the duration of the pressing operation on the position mark control reaching the pressing duration threshold, the position mark control corresponding to the pressing operation is displayed in the enlarged mode.
  • in the magnification mode, the position mark control is resized to a preset multiple of its original size, where the preset multiple is greater than 1, for example, 1.2 times the original size.
  • FIG. 5B is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control X3 in FIG. 5B is displayed in an enlarged mode, which is larger than the position mark control X3 displayed in the original size in FIG. 5A .
  • the hand shape represents the pressing operation for the position mark control X3.
  • the pressing duration threshold can be 0.5 seconds or less. When the duration of the pressing operation reaches the pressing duration threshold, the position mark control X3 can be moved.
  • step 3032B in response to the movement operation for the position mark control, the position mark control displayed in the magnification mode is controlled to move synchronously from the first position.
  • the first position is the starting position of the movement operation.
  • Synchronous movement means that during the movement operation, in response to the user continuing to press the location mark control X3, the location mark control X3 is controlled to be synchronously displayed at the pressed position corresponding to the movement operation on the map.
  • the position mark control X3' represents the moved position mark control X3.
  • the direction of the arrow between the two is the direction of the movement operation, and the dotted line is the movement trajectory of the mark control X3.
  • step 3033B in response to the move operation being released at the second position, the position mark control displayed in the magnification mode is moved to the second position.
  • when the movement operation is released at the second position, it means that the user's finger has moved to the second position and is lifted to stop pressing the map.
  • the second position is the end position of the movement operation.
  • the second position of the position mark control X3 ′ on the map 501A is the end position of the movement operation.
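  • The press-magnify-move-release flow of steps 3031B to 3033B can be sketched as a small state machine. This is an illustrative sketch only; the class name `MarkerDrag` is invented here, and the 0.5-second threshold and 1.2 magnification factor follow the example values given above rather than a reference implementation.

```python
class MarkerDrag:
    """Sketch of steps 3031B-3033B: a long press magnifies a position
    mark control, moving drags it, and releasing drops it at the end."""

    PRESS_THRESHOLD = 0.5  # seconds; the text suggests 0.5 s or less
    MAGNIFY_FACTOR = 1.2   # preset multiple greater than 1

    def __init__(self, first_position):
        self.position = first_position  # the first position (x, y)
        self.scale = 1.0
        self.dragging = False

    def press(self, duration):
        # step 3031B: magnify once the press duration reaches the threshold
        if duration >= self.PRESS_THRESHOLD:
            self.scale = self.MAGNIFY_FACTOR
            self.dragging = True

    def move(self, position):
        # step 3032B: the magnified control follows the pressed position
        if self.dragging:
            self.position = position

    def release(self, position):
        # step 3033B: drop at the second position and restore the size
        if self.dragging:
            self.position = position
            self.scale = 1.0
            self.dragging = False
        return self.position
```

  • A press shorter than the threshold never sets the dragging flag, so an accidental tap on the map does not start a drag.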
  • redundant position mark controls can also be deleted in the following manner: in response to a selection operation on any position mark control, the selected position mark control is displayed in a selected state (for example: reverse color, highlight, check mark, cross mark, etc.); in response to a deletion operation on the position mark control in the selected state, the position mark control in the selected state is deleted.
  • Deleting means hiding or blocking the position mark control of the second virtual object in the map, or displaying the position mark control of the second virtual object in a blurred manner.
  • Figure 6B is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the map 501A includes a mark point Q1, a position mark control X4 (representing the second virtual object numbered 4), and a delete control 601B.
  • a cross 602B is displayed on the selected mark point Q1 and the position mark control X4 to indicate the selected state.
  • in response to the deletion control 601B being triggered, the selected marker point Q1 and the position mark control X4 are deleted.
  • the lower figure of Figure 6B shows the map 501A with the marker point Q1 and the position marker control X4 deleted.
  • the position mark control may also be automatically deleted in the following manner: in response to a move operation for any position mark control, the position mark control of each second virtual object that has not been moved is hidden.
  • Figure 6C is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • a pressing operation is applied to the position mark control X3.
  • the map 501A includes a position mark control X5 (representing the second virtual object numbered 5) and a position mark control X4. During the movement of the position mark control X3, the position mark controls X5 and X4, which have not been moved, are hidden. If the movement operation for the position mark control X3 is released, the hidden position mark controls X5 and X4 are restored.
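  • The auto-hide behavior described here reduces to a simple visibility rule: while one control is being dragged, only that control remains visible, and releasing the drag restores the rest. The function below is a hedged sketch with illustrative identifiers, not part of the embodiment.

```python
def visible_controls(all_controls, dragging_id, drag_active):
    """Sketch of the auto-hide rule: during a move operation, every
    position mark control other than the one being moved is hidden;
    when the move is released, the hidden controls are restored."""
    if not drag_active:
        return set(all_controls)          # nothing dragged: show all
    # only the dragged control stays visible during the move
    return {dragging_id} if dragging_id in all_controls else set()
```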
  • step 304A a message is sent to the second virtual object.
  • the message is used to instruct the second virtual object to arrive at the second location and execute the instruction.
  • the message carries the second location and the instruction, and the message is a point-to-point message.
  • step 303A and step 304A are executed simultaneously, and the message types include voice messages, text messages, and voice-text mixed messages.
  • the message may be sent to the second virtual object in any of the following ways:
  • in response to the movement operation on the position mark control being released, a message type selection control is displayed, and in response to a selection operation on the message type selection control, a message is sent to the second virtual object based on the selected message type.
  • the message type selection control includes the following message types: voice message, text message, and voice and text mixed message.
  • the mixed voice and text message is presented in the following manner: the text of the message is displayed in the human-computer interaction interface of the second virtual object while the corresponding voice is played to the second virtual object.
  • the method of indicating the second virtual object through a message includes:
  • sending, in the form of voice or text, message content that includes the instruction to the second virtual object, and displaying at least one of the following in the map interface corresponding to the second virtual object: a location mark at the second position, the direction of the second position relative to the position mark control of the second virtual object, and the path between the position mark control of the second virtual object and the second position. In this way, the second position does not need to be included in the voice or text message.
  • the text content of the text message is "attack the enemy", and the path between the second virtual object and the second location where the enemy virtual object is located is displayed on the map interface corresponding to the second virtual object.
  • the text content does not contain an explicit second position, but the second position is indicated to the second virtual object by displaying a path.
  • the message content is "attack the enemy on the plain (3216, 4578)", and the path between the second virtual object and the second location where the enemy virtual object is located is displayed on the map interface corresponding to the second virtual object.
  • (3216, 4578) is the position coordinate of the second position, and "on the plain" is the description of the second position.
  • the second position can be displayed in the following manner: when a position mark control or position mark exists at the second position, the position mark control or position mark is highlighted (for example: highlighted, circled with a callout frame, displayed in another color, displayed in bold, displayed flashing, etc.); when no position mark control or position mark exists at the second position, a position mark is displayed at the second position.
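  • The highlight-or-create rule for the second position can be sketched as follows. This is an illustrative sketch; the dictionary-based marker store is an assumption made purely to keep the example self-contained.

```python
def show_second_position(markers, second_pos):
    """Sketch of the display rule: if a position mark control or
    position mark already exists at the second position, highlight it;
    otherwise create a new position mark there."""
    if second_pos in markers:
        markers[second_pos]["highlighted"] = True  # e.g. bold or flashing
    else:
        markers[second_pos] = {"highlighted": False, "created": True}
    return markers[second_pos]
```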
  • Figure 7B is a schematic diagram of the virtual scene interface corresponding to the second virtual object provided by an embodiment of the present application; a map 702 is displayed in the upper right corner of the virtual scene 701, and the screen related to the movement operation in the map corresponding to the first virtual object is synchronously displayed on the map 702 of the second virtual object, so that the message is more eye-catching and the user corresponding to the second virtual object can respond in time.
  • the virtual scene 701 shows the text content 703 of the message "Gather at teammate No. 2", that is, gather at the location of the virtual object corresponding to the user who sent the message, where teammate No. 2 refers to the first virtual object.
  • the direction and path between the position mark control of the second virtual object and the second position are displayed in the map 702 .
  • the instruction carried by the message can be determined in the following manner: displaying an instruction control inside or outside the map, where the instruction control includes multiple types of candidate instructions; in response to an instruction selection operation on any candidate instruction in the instruction control (the instruction selection operation can be performed before or after the move operation), the selected candidate instruction is displayed in a selected state and used as the instruction carried in the message.
  • FIG. 7A is a schematic diagram of an arrangement of command controls provided by an embodiment of the present application.
  • the command types corresponding to the command control 502A include: attack commands, defense commands, and movement commands.
  • the dark color indicates that the candidate instruction is selected.
  • the selected state can also be displayed by highlighting, bolding, check marks, etc.
  • in response to an instruction selection operation on any candidate instruction in the instruction control, the selected candidate instruction is maintained in the selected state until the next instruction selection operation is received; or, after the point-to-point message is sent to the second virtual object, the display switches from the selected candidate instruction being in the selected state to the default instruction being in the selected state.
  • the default instruction is a candidate instruction set to be in an automatically selected state among multiple types of candidate instructions.
  • the default instruction can be the candidate instruction ranked first when all candidate instructions are sorted in descending order of usage probability.
  • the movement instruction is often used in virtual scenes, and the movement instruction is used as the default instruction.
  • the default command is a movement command.
  • the attack command in the selected state is switched to the unselected state, and the movement command is switched to the selected state.
  • the user selects a movement instruction, and the movement instruction is maintained in the selected state until the next instruction selection operation.
  • by automatically maintaining the selected state of a candidate instruction in the instruction control, or by switching the default instruction to the selected state, the user is prevented from repeatedly operating the instruction control, which saves the time of sending messages and saves computing resources.
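  • The two selection behaviors described above (keep the selection until the next selection operation, or reset to the default instruction after each point-to-point message) can be sketched as follows. The class and parameter names are illustrative assumptions.

```python
class InstructionControl:
    """Sketch of the instruction-selection rules: the selected candidate
    instruction either persists until the next selection operation, or
    resets to the default instruction after each message is sent."""

    def __init__(self, candidates, default="move", reset_after_send=True):
        assert default in candidates
        self.candidates = candidates
        self.default = default
        self.selected = default          # default starts in selected state
        self.reset_after_send = reset_after_send

    def select(self, instruction):
        # the instruction selection operation
        if instruction in self.candidates:
            self.selected = instruction

    def send_message(self, second_position):
        # build the point-to-point message carrying position + instruction
        message = {"position": second_position, "instruction": self.selected}
        if self.reset_after_send:
            self.selected = self.default  # e.g. attack switches back to move
        return message
```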
  • the instruction carried by the message may also be determined in the following manner: displaying an instruction control inside or outside the map, where the instruction control includes multiple types of candidate instructions, one of which is in an automatically selected state; in response to no instruction selection operation on any candidate instruction in the instruction control being received within a set period of time, the candidate instruction in the automatically selected state is used as the instruction carried in the message.
  • the set period of time can be 5 minutes. Assume the movement instruction in the instruction control is in the automatically selected state; if no instruction selection operation is received within 5 minutes, the movement instruction in the automatically selected state is used as the instruction carried by the message.
  • in this way, the instruction carried in the message can be selected for the user without frequent operations by the user, saving the time of sending messages and saving computing resources.
  • the multiple candidate instructions can also be sorted in any of the following ways:
  • sorting by usage probability changes adaptively according to the second virtual object dragged each time; that is, the sorting order differs for different second virtual objects.
  • for example, the second virtual object A often receives messages carrying attack instructions.
  • in the sorting of the instruction control 502A' in Figure 7A corresponding to the second virtual object A, attack instructions are ranked first, while other instructions are ranked lower.
  • the second virtual object B often receives messages carrying movement instructions.
  • accordingly, in the sorting corresponding to the second virtual object B, the movement instruction is ranked first.
  • the user's common instructions, or the common instructions for a certain second virtual object, are displayed at the head of the instruction control, which makes it convenient for the user to quickly find the required instruction and send messages efficiently.
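  • The per-teammate adaptive ordering can be sketched with empirical usage probabilities derived from the instructions previously sent to that particular second virtual object. The history-list representation is an assumption made for illustration.

```python
from collections import Counter

def sort_candidates(history, candidates):
    """Sketch of the adaptive ordering: instructions sent more often to
    this particular teammate rank first. `history` is the list of
    instructions previously sent to that teammate."""
    counts = Counter(history)
    total = max(len(history), 1)
    # descending usage probability; ties keep the original order
    return sorted(candidates, key=lambda c: -counts[c] / total)
```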
  • the usage probability of each candidate instruction can be determined in the following manner: calling a neural network model for prediction processing based on parameters of virtual objects in the virtual scene to obtain the usage probability corresponding to each candidate instruction.
  • the parameters of the virtual objects include at least one of the following: the position and attribute values of the first virtual object, where the attribute values include combat power, health value, defense value, etc.; the position and attribute values of the second virtual object; and the difference between the attribute values of the camp to which the first virtual object belongs and the attribute values of the enemy camp (the difference in attribute values can represent the comparison of strength between the two camps).
  • the neural network model is trained based on the game data of at least two camps.
  • the game data includes: the positions and attribute values of multiple virtual objects in at least two camps, as well as the instructions executed by the virtual objects of the winning camp and the instructions executed by the virtual objects of the losing camp; each instruction executed by a virtual object of the winning camp is labeled with probability 1, and each instruction executed by a virtual object of the losing camp is labeled with probability 0.
  • the neural network model can be a graph neural network model or a convolutional neural network model.
  • the initial neural network model is trained based on the game data: the initial neural network model calculates a predicted probability from the game data, and the difference between the predicted probability and the labeled actual probability is substituted into the loss function to calculate the loss value.
  • the loss function can be a mean square error loss function, a mean absolute error loss function, a quantile loss function, a cross-entropy loss function, etc.
  • based on the loss value, back propagation is performed in the initial neural network model, and the parameters of the neural network model are updated through the back propagation (BP) algorithm, so that the trained neural network model can predict, based on the current parameters of the virtual objects in the same camp, the usage probability of each candidate instruction being currently used by the first virtual object.
  • the usage probability is obtained through the neural network model, which improves the accuracy of obtaining the usage probability.
  • the candidate instructions are sorted based on the usage probability, so that the user can quickly find the required instructions and send messages efficiently.
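  • The training scheme above (winning-camp instructions labeled 1, losing-camp instructions labeled 0, a loss on the predicted probability, gradient updates) can be illustrated with a deliberately tiny stand-in model. A logistic model with cross-entropy replaces the graph or convolutional neural network here purely to keep the sketch self-contained; the features and data below are invented for illustration.

```python
import math
import random

def train_usage_model(features, labels, epochs=500, lr=0.5):
    """Tiny stand-in for the usage-probability model: logistic
    regression trained by gradient descent on cross-entropy, with
    labels 1 (winning camp's instructions) and 0 (losing camp's)."""
    random.seed(0)
    d = len(features[0])
    w = [random.gauss(0, 0.01) for _ in range(d)]
    b = 0.0
    n = len(labels)
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            g = p - y                            # d(cross-entropy)/d(logit)
            w = [wi - lr * g * xi / n for wi, xi in zip(w, x)]
            b -= lr * g / n
    def predict(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    return predict

# invented toy data: feature = [own-camp attribute advantage, distance factor]
X = [[2.0, 0.1], [1.5, 0.2], [-1.0, 0.9], [-2.0, 0.8]]
y = [1.0, 1.0, 0.0, 0.0]   # winners' instructions -> 1, losers' -> 0
predict = train_usage_model(X, y)
```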
  • FIG. 3C is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the message sent to the second virtual object may be determined through steps 3041C to 3042C, which will be described in detail below.
  • step 3041C based on the movement operation, the first position and the second position, the starting position characteristics and the end position characteristics corresponding to the movement operation in the virtual scene are determined, and the starting position characteristics and the end position characteristics are used as the trigger condition.
  • the starting position of the moving operation is the first position
  • the end position of the moving operation is the second position
  • Location features can be the area where the location is located, whether there are markers near the location, etc.
  • step 3041C can be implemented in the following manner: determining the first area (for example, a non-safe area or a safe area) where the first position is located in the virtual scene, and the second area where the second position is located in the virtual scene.
  • Figure 5C is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the first position corresponding to the position mark control X3 is in the safety zone 501C, and the end position of the movement operation is within the safety zone 501C (the position of the position mark control X3').
  • the marker type for the end position is no marker.
  • the marker type corresponding to the second position in the map interface can be determined in the following manner: detecting a partial area in the map centered on the second position; for example, refer to Figure 6G, which is a schematic diagram provided by an embodiment of the present application.
  • the partial area 601G may be a circular area centered on the second position, and the radius R of the circular area is positively correlated with the accuracy of recognizing misoperations.
  • when at least one position mark control is detected, the marker type of the detected position mark control closest to the second position is used as the marker type corresponding to the second position in the map interface; when no position mark control is detected, no marker is used as the marker type corresponding to the second position in the map interface.
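  • The nearest-control detection inside the circular area of radius R can be sketched as follows; the pair-list representation of the controls is an assumption for illustration.

```python
import math

def marker_type_at(second_pos, controls, radius):
    """Sketch of the detection rule: within a circular area of radius R
    centered on the second position, the marker type of the closest
    position mark control wins; otherwise the result is 'no marker'.
    `controls` is a list of ((x, y), marker_type) pairs."""
    best, best_dist = None, radius
    for (x, y), marker_type in controls:
        dist = math.hypot(x - second_pos[0], y - second_pos[1])
        if dist <= best_dist:
            best, best_dist = marker_type, dist
    return best if best is not None else "no marker"
```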
  • the mark type corresponding to the second position in the map interface is a virtual object position mark.
  • step 3042C a query is performed in the database based on the trigger condition, and messages matching the trigger condition are obtained.
  • the database can store the correspondence between different messages and different trigger conditions.
  • when the instruction type is a movement instruction and a virtual vehicle exists at the second position, the content of the message is to go to the second position to gather and enter the virtual vehicle.
  • here, a drivable virtual vehicle is taken as an example.
  • the position mark control Z1 of the virtual vehicle is displayed near the position mark control X2 of the first virtual object.
  • the move operation moves the position mark control X3 to the position mark control X2, and the message content can be "Gather at the position of teammate 2 and board the vehicle".
  • when the type of instruction is a movement instruction and there is no virtual vehicle at the second position, the content of the message is to go to the second position to gather.
  • when no virtual vehicle exists at the second position of the movement operation, the content of the message may be "Move to the designated position".
  • Figure 5E is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the command in the current selected state of the command control 502A is an attack command.
  • a small icon of the attack command can be displayed near the moved position mark control X3'.
  • the small icon of the attack command is synchronously displayed on the map of the second virtual object, which facilitates the second virtual object to determine the location to be attacked.
  • the content of the message can be "attack the designated location".
  • the content of the message is to go to the second position for defense.
  • the defensive instructions are processed in the same way as the offensive instructions and will not be described in detail here.
  • the corresponding message content can be "defend the designated position".
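  • The trigger-condition lookup of step 3042C can be sketched as a small template table. The wording of the templates follows the examples above; the key structure of the table is an assumption made for illustration.

```python
def build_message(instruction, vehicle_at_end):
    """Sketch of step 3042C: trigger conditions (instruction type,
    whether a virtual vehicle exists at the second position) are looked
    up in a table of message templates."""
    templates = {
        ("move", True):  "Gather at the designated position and board the vehicle",
        ("move", False): "Move to the designated position",
        ("attack", False): "Attack the designated position",
        ("defend", False): "Defend the designated position",
    }
    # the vehicle condition only matters for movement instructions
    key = (instruction, vehicle_at_end if instruction == "move" else False)
    return templates[key]
```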
  • Figure 3D is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application. Messages can also be sent to the second virtual object outside the map through steps 302D to 304D, which will be described in detail below.
  • step 302D a position mark control of a non-appearing virtual object is displayed outside the map.
  • the non-appearing virtual object is a second virtual object that does not currently appear in the partial area.
  • FIG. 6D is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control X4 represents a second virtual object that is numbered 4 and located outside the range of the virtual scene corresponding to the map.
  • Location mark control X4 is displayed on the upper edge of the outside of map 501A.
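  • One way to place the control of a non-appearing virtual object on the map edge (step 302D) is to clamp its map coordinates to the map border; the embodiment does not specify the projection used, so the sketch below is only one plausible choice with illustrative names.

```python
def edge_position(map_pos, map_rect):
    """Sketch of the edge display: a second virtual object whose mapped
    position falls outside the map gets its position mark control
    clamped to the nearest point on the map border.
    `map_rect` is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = map_rect
    x = min(max(map_pos[0], x0), x1)
    y = min(max(map_pos[1], y0), y1)
    return (x, y)
```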
  • step 303D in response to the move operation on the position mark control of the non-appearing virtual object, the position mark control is moved from outside the map to the second position.
  • the movement operation in step 303D is the same as that in step 303A, and will not be described again here.
  • the hand shape represents a pressing operation.
  • the position mark control X4 is moved from outside the map 501A to the second position inside the map 501A (the position where the position mark control X4' is located).
  • the position mark control X4' is used to represent the moved position mark control X4.
  • step 304D a message is sent to the non-appearing virtual object.
  • the message carries the second location and instructions, and the message is a point-to-point message.
  • step 303D and step 304D are executed simultaneously.
  • for the determination of the content of the message in step 304D, please refer to steps 3041C to 3042C above.
  • the method of sending the message in step 304D is the same as that in step 304A, which will not be described again here.
  • FIG. 3E is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application.
  • step 303A can be implemented through step 3031E and step 3032E, and step 304A can be implemented through step 3041E, which will be described in detail below.
  • step 3031E in response to the batch selection operation, multiple location mark controls are displayed in a selected state.
  • Figure 6E is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control X3, the position mark control X4, and the position mark control X5 are respectively marked with check marks 601E.
  • the above three position mark controls are selected in batches and displayed in a selected state.
  • step 3032E in response to the move operation, the plurality of position mark controls are moved from the first positions where they are respectively located to the second position.
  • the move operation is directed to any one of the multiple selected position mark controls.
  • the hand is pressed at the position mark control X3, and the move operation acts only on the position mark control X3.
  • the position mark control X3 moves along with the movement trajectory of the movement operation on the map.
  • each position mark control in the selected state that did not follow the movement operation is moved from its respective first position to the second position.
  • the hand shape stays in the second position, that is, the movement operation is released in the second position, and the position mark control X4 and the position mark control X5 are moved to the second position.
  • step 3041E messages are sent to second virtual objects respectively corresponding to the plurality of position mark controls.
  • the message carries the second location as well as the instructions.
  • Each second virtual object receives the same second location and instructions.
  • step 3032E and step 3041E are executed simultaneously.
  • the message sending method in step 3041E is the same as the above step 304A, and will not be described again here.
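  • The batch flow of steps 3031E to 3041E can be sketched as follows: every selected position mark control is moved to the second position, and the same point-to-point message is sent to each corresponding second virtual object. The dictionary representation of a control is an illustrative assumption.

```python
def batch_move_and_send(selected_controls, second_pos, instruction):
    """Sketch of steps 3031E-3041E: all batch-selected position mark
    controls move to the second position, and each corresponding second
    virtual object receives the same second position and instruction."""
    messages = {}
    for control in selected_controls:
        control["position"] = second_pos        # step 3032E: move
        messages[control["id"]] = {             # step 3041E: send
            "position": second_pos,
            "instruction": instruction,
        }
    return messages
```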
  • FIG. 3F is a schematic flowchart of a message processing method in a virtual scene provided by an embodiment of the present application.
  • messages can also be sent to the unmoved virtual object through steps 305F to 306F, which will be described in detail below.
  • step 305F a send message control corresponding to the unmoved virtual object is displayed in the map.
  • the unmoved virtual object is the second virtual object to which the message has not been sent, and the send message control is used to send the message repeatedly.
  • FIG. 6F is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control X3 is moved to the second position.
  • the position mark control X3 is displayed at the current location of the second virtual object numbered 3.
  • position mark control X4 has not been moved.
  • the second virtual object corresponding to the position mark control X4 is an unmoved object, and the message sending control F1 is displayed near the position mark control X4.
  • Send message control F1 is used to repeatedly send the last message sent.
  • step 306F in response to a triggering operation on any message-sending control, a message is sent to the unmoved virtual object corresponding to the triggered message-sending control.
  • the message received by the second virtual object corresponding to the position mark control X3 is "gather to the designated position".
  • the message sending control F1 corresponding to the position mark control X4 is triggered, the second virtual object corresponding to the position mark control X4 also receives the "gather to the designated position" message.
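  • The resend behavior of the send-message control F1 (steps 305F to 306F) can be sketched as a channel that remembers the last message sent. The class and method names are illustrative assumptions.

```python
class MessageChannel:
    """Sketch of the send-message control F1: triggering it repeats the
    last sent message to a teammate who has not yet received one."""

    def __init__(self):
        self.last_message = None
        self.sent_to = {}

    def send(self, teammate_id, message):
        # a normal point-to-point send; also remembered for resending
        self.last_message = message
        self.sent_to[teammate_id] = message

    def resend_last(self, teammate_id):
        # step 306F: the triggered control repeats the last message
        if self.last_message is not None:
            self.sent_to[teammate_id] = self.last_message
        return self.sent_to.get(teammate_id)
```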
  • the embodiment of the present application displays the position mark control of the second virtual object in the same camp as the first virtual object in the map interface.
  • a corresponding message is sent to the second virtual object based on the movement operation.
  • the map interface of the virtual scene is used to realize the quick sending of point-to-point messages. There is no need to speak or enter text.
  • the message can be sent quickly by dragging the position mark control, saving the time required for sending messages; and, since the message is sent only to the second virtual object, precise point-to-point message sending is achieved, avoiding interference with other virtual objects in the same camp; at the same time, the position mark controls in the map interface of the virtual scene are reused, eliminating the need to set up new controls in the human-computer interaction interface for message sending, and point-to-point message sending can be realized without radio equipment (such as microphones), which saves the computing resources required for the virtual scene.
• In related solutions, communication relies on radio equipment (such as microphones) or on messages visible or audible to the whole team. Messages visible or audible to the whole team may interfere with some teammates: on the one hand, a high concurrency of such messages easily prevents teammates from extracting effective messages; on the other hand, information visible to the whole team wastes computing resources and occupies the running memory of teammates' clients. Moreover, these communication methods cannot achieve individual communication with a specific teammate.
• In contrast, the message processing method in the virtual scene provided by the embodiment of the present application reuses the map of the virtual scene, and can quickly send point-to-point messages to a teammate by moving the position mark control (for example: teammate icon control) corresponding to the teammate on the map. This consumes few computing resources and improves the efficiency of message sending.
  • FIG. 8 is an optional flow diagram of a message processing method in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 8 .
• In step 801, it is determined whether the duration of the pressing operation on the teammate icon control in the map is greater than the pressing duration threshold.
  • the map is a virtual map corresponding to the virtual scene.
  • the virtual map is bound to a coordinate system, and the coordinates of each position in the virtual scene are fixed in the virtual map.
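As a non-limiting illustration of the fixed binding described above, the conversion from virtual-scene coordinates to map coordinates can be sketched as follows; the function name, scale, and dimension values are assumptions for illustration, not part of the embodiment.

```python
def world_to_map(world_x, world_y, world_size=8000.0, map_size=256.0):
    """Convert a virtual-scene position to map coordinates, assuming the
    map is a uniformly scaled preview of the whole scene with a shared
    origin; world_size and map_size are hypothetical dimensions."""
    scale = map_size / world_size
    return (world_x * scale, world_y * scale)
```

Because the binding is fixed, every position in the virtual scene maps to one stable point on the map, which is what allows position mark controls to track the objects they represent.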
  • the teammate icon control is a position mark control used in the map to represent the second virtual object of the same team (or camp) as the first virtual object corresponding to the user.
  • the teammate icon control is a position mark control that can be operated (for example: movement operation or press operation).
  • Figure 5A is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application; in the map 501A, the position mark control X2 is the position mark control of the first virtual object, and the number 2 means that the number of the first virtual object in the team or camp is 2.
• The position mark control X3 represents the second virtual object numbered 3.
  • a map zoom control 503A and an instruction control 502A are provided on the outer edge of the map 501A.
• The map zoom control 503A is used to adjust the ratio between the map and the virtual scene: moving the circular icon of the map zoom control 503A toward the plus sign 504A zooms in on the map; conversely, moving it toward the minus sign 505A zooms out.
  • Instruction control 502A is used to switch the type of instruction carried in the message sent to teammates.
• For example, the pressing duration threshold may be 0.5 seconds.
• When the user presses and holds the teammate icon control for 0.5 seconds, it is determined that an icon trigger operation has been received, and the teammate icon control can then move on the map according to the movement operation.
• In response to the icon trigger operation, the teammate icon control is displayed in an enlarged mode and moves with the movement operation (the movement operation, that is, maintaining the pressing operation while sliding or dragging the pressed position on the human-computer interaction interface).
  • FIG. 5B is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the position mark control X3 is displayed in an enlarged mode and is larger than the position mark control X3 in FIG. 5A .
  • the teammate icon control is displayed in an enlarged mode, which makes the manipulated position mark control more eye-catching, facilitates user operation, and improves interaction efficiency.
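The long-press check in step 801 can be sketched as follows; this is an illustrative sketch only, with the 0.5-second value taken from the example above and all names assumed.

```python
PRESS_DURATION_THRESHOLD = 0.5  # seconds, example value from the text

def is_icon_trigger(press_start, now, threshold=PRESS_DURATION_THRESHOLD):
    """Return True once the press on the teammate icon control has lasted
    long enough; the control then enters the enlarged, movable state."""
    return (now - press_start) >= threshold
```

A press shorter than the threshold leaves the control in its normal state, so ordinary map taps are not misread as the start of a message-sending drag.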
• In step 802, in response to the movement operation for the teammate icon control, the teammate icon control is moved to the end position of the movement operation.
  • the moving operation may be a continuous dragging operation or a sliding operation.
  • the hand shape represents the pressing operation of the user's finger on the position mark control X3.
• While the position mark control X3 is movable, the user moves the finger from the first position, where the position mark control X3 is currently located, toward the second position in the direction of the arrow.
• The position mark control X3 moves following the position of the movement operation on the human-computer interaction interface.
• When the movement operation stops or is released, the stopped or released position is used as the end position of the movement operation, that is, the second position.
  • the position mark control X3' at the second position is the moved position mark control.
  • the position mark control X3' is temporarily displayed at the second position.
  • the position mark control of the second virtual object is restored to the corresponding current position of the second virtual object in the map.
• In step 803, the currently selected instruction type is determined.
  • FIG. 7A is a schematic diagram of an arrangement of command controls provided by an embodiment of the present application.
  • the command types corresponding to the command control 502A include: attack commands, defense commands, and movement commands.
  • step 804 is executed to determine the starting position characteristics and end position characteristics of the movement operation.
  • the starting point position characteristics refer to the area corresponding to the starting point in the virtual scene, such as: safe area and non-safe area; in the non-safe area, the health value of the virtual object will periodically decrease.
  • the safe zone is an area in the virtual scene where the health value of the virtual object will not enter a periodic decline state.
• The end position characteristics refer to the area of the virtual scene corresponding to the end position (for example: safe area or non-safe area) and whether a position mark control of a virtual object exists at the end position (within a circular area centered on the end position).
  • Markers are points on a map that represent locations.
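One possible way to derive the position characteristics described above is sketched below; the circular detection radius, data layout, and names are illustrative assumptions.

```python
import math

def position_features(point, safe_center, safe_radius, marks, detect_radius=10.0):
    """Return (area, mark_type) for a map point.
    marks: list of (x, y, mark_type) tuples; the nearest mark within the
    circular area centered on the point is reported, else "no mark"."""
    area = "safe" if math.dist(point, safe_center) <= safe_radius else "non-safe"
    best_dist, best_type = detect_radius, "no mark"
    for x, y, mark_type in marks:
        d = math.dist(point, (x, y))
        if d <= best_dist:
            best_dist, best_type = d, mark_type
    return area, best_type
```

The same routine can serve both endpoints of the movement operation: the start position needs only the area, while the end position uses both the area and the nearest mark type.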
  • FIG. 5D is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
• After the place mark mode is entered, in response to a selection operation on any location on the map, a first custom mark point corresponding to the selected location is displayed, for example: the first custom mark point D1.
• In response to a trigger operation for the second mark control 502D, a second custom mark point D2 is displayed at the position where the position mark control X2 of the first virtual object is located.
  • the characteristics of the end position corresponding to the movement operation are: there is a mark point in the safe zone, and the mark point is the first custom mark point D1.
• When there is a corresponding position mark control or mark point at the end position, the message contains content related to that position mark control or mark point. For example, if there is a virtual vehicle at the end position, the message may include "get on the vehicle", "go to the vehicle location to board the vehicle", etc. If the end position is in the safe zone, the message may include "enter the safe zone" and other content.
• In step 805, based on the starting position characteristics and the end position characteristics, the corresponding message is matched in the message trigger condition library.
• The trigger conditions preset for each triggerable message are summarized into a database (the message trigger condition library).
  • the message trigger condition library stores messages and trigger conditions corresponding to the messages.
• Based on the end position characteristics of the movement operation, or on both the end position characteristics and the starting position characteristics, the corresponding message is sent to the teammate corresponding to the moved teammate icon control.
  • the starting position and end position of the sliding operation are used as the triggering conditions for the sliding operation, and the same triggering conditions are matched in the triggering condition library.
  • the starting position is used to determine the behavioral content (circle/movement) of the virtual object in the sent message.
  • the end position is used to determine the destination noun in the message (specified location/virtual object location/vehicle).
  • the relationship between the trigger condition and the message is as follows:
• A virtual vehicle exists at the end position to which the teammate icon control is moved, and the starting position and end position are both in the safe zone; the corresponding message is "Go to the vehicle position and board the vehicle".
  • the end position to which the teammate icon control is moved is the position of the first virtual object, and the corresponding message is "Gather to me”.
  • the starting position of the teammate icon control is outside the safety zone, and the end position is within the safety zone, and the corresponding message is "Enter the safety circle”.
  • different starting point position characteristics and end point position characteristics correspond to different messages, as described in detail below.
  • Figure 5C is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
• The position mark control X3 moves from outside the safety area 501C to within the safety area 501C; then a text message of "Quickly enter the safe zone" can be sent to the designated teammate, and the end position corresponding to the movement operation is displayed in the map interface corresponding to the designated teammate.
  • a "Go to specified” message will be sent to the specified teammate.
  • "Place” message the designated location is the location corresponding to the location mark, and the location mark is highlighted in the map interface of the designated teammate (for example: bolding the location mark, displaying the location mark in different colors, highlighting the location mark); at the same time If the teammate is outside the safe zone, the message "Enter the safe zone and go to the designated location" will be sent.
  • a "Gather with me” message is sent to the designated teammate.
  • a message of "get on the bus quickly” or “get on the bus at a certain location” is sent to the designated teammate, where a certain location refers to the location in the virtual scene.
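The correspondence between trigger conditions and messages listed above can be sketched as a lookup table; the keys and message wordings below are illustrative assumptions rather than the complete library.

```python
# (start area, end area, end mark type) -> preset message text
TRIGGER_LIBRARY = {
    ("safe", "safe", "vehicle"): "Go to the vehicle position and board the vehicle",
    ("non-safe", "safe", "no mark"): "Enter the safe zone",
    ("safe", "safe", "first virtual object"): "Gather to me",
    ("safe", "safe", "custom mark"): "Go to the designated location",
}

def match_message(start_area, end_area, end_mark):
    """Look up the preset message for the given trigger condition;
    returns None when no condition matches."""
    return TRIGGER_LIBRARY.get((start_area, end_area, end_mark))
```

A dictionary keyed on the full condition tuple keeps the lookup constant-time, which matters because the message must be sent immediately when the movement operation is released.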
• When the teammate icon control moves according to the movement operation, the server starts comparing the starting position characteristics of the movement operation with the trigger conditions in the message trigger condition library. When the movement operation ends, the server continues searching, among the messages corresponding to the starting position characteristics already found, based on the end position characteristics of the movement operation, obtains the matching trigger condition, and sends the message corresponding to the matched trigger condition. For example: the position of the first virtual object corresponding to the user is located in the safe zone, a movement operation is applied to a teammate icon control outside the safe zone, and the end position of the movement operation is the current position of the position mark control of the first virtual object.
• The text content of the message sent to the teammate is "Quickly enter the safe zone and gather at my position". In the map interface corresponding to the second virtual object (the teammate who received the message from the first virtual object), the position mark control corresponding to the first virtual object is highlighted (for example: highlighted display, enclosing the position mark control with a label box, displaying it in a different color, or bold display), to facilitate the user in controlling the second virtual object to go to the location of the first virtual object.
• In step 806, the matched message is sent to the teammate corresponding to the teammate icon control.
  • the message is sent when the movement operation stops (for example, the user stops moving after moving the position mark control to a certain position) or is released (for example, the user releases the finger pressing the position mark control).
  • the message sending method may be a voice message, a text message, or a voice-mixed text message.
  • Ways of indicating the second virtual object through a message include:
  • the text content of the text message is "Go to Building B (1234, 5678)", “Building B (1234, 5678)" is the second location, “Go” represents the movement instruction, (1234, 5678) is the building B's coordinate position on the map.
  • the text content of the text message is "Go to the second floor of Building A", "Go” represents a movement instruction, and "The second floor of Building A” is a clear second location.
• The message content including the instruction is sent to the second virtual object in the form of voice or text, and at least one of the following is displayed in the map interface corresponding to the second virtual object: the location mark of the second location; the direction of the second location relative to the position mark control of the second virtual object; and the path between the position mark control of the second virtual object and the second location.
  • the voice or text message may not contain the second position, or may not contain an explicit second position.
  • FIG. 7B is a schematic diagram of the virtual scene interface corresponding to the second virtual object provided by the embodiment of the present application
• A map 702 is displayed in the upper right corner of the virtual scene 701, and the picture related to the movement operation in the map corresponding to the first virtual object is synchronously displayed on the map 702 of the second virtual object, making the message more eye-catching and facilitating a timely response by the user corresponding to the second virtual object.
• The text content 703 of the message, "Gather at Teammate No. 2", is displayed in the virtual scene 701, where Teammate No. 2 refers to the first virtual object, that is, gathering at the location of the virtual object corresponding to the user who sent the message.
  • the map 702 displays the direction and path between the position mark control of the second virtual object and the second position.
  • the text content of the message is "Gather to the specified location (1472, 2147)"
• The text content is displayed in text form or played in voice form on the human-computer interaction interface corresponding to the second virtual object, and the map of the second virtual object displays the position mark of the specified position, the path between the position mark control of the second virtual object and the position mark corresponding to the specified position, and the direction of the specified position relative to the position mark control of the second virtual object.
  • (1472, 2147) is the position coordinate corresponding to the specified position on the map.
  • Figure 5F is a map schematic diagram of the message processing method in the virtual scene provided by the embodiment of the present application.
  • the location mark 501F of the second location is synchronously displayed on the map interface of the second virtual object that receives the message.
• The position mark 501F is displayed at the second position (the position mark is a circle in Figure 5F; the position mark can also be presented in the form of highlights, mark boxes, etc., and can be marked in different colors to make the second position more eye-catching).
  • the dotted line between the position mark 501F and the position mark control X3 is the path between the two, and the arrow from the position mark control X3 pointing to the position mark 501F represents the direction between the two.
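The direction arrow and path shown in Figure 5F can be computed as sketched below; a straight-line path is an assumption for illustration, since an actual implementation may use pathfinding around obstacles.

```python
import math

def direction_and_distance(ctrl_pos, target_pos):
    """Heading (degrees, measured from the positive x-axis) and straight-line
    distance from the position mark control to the second position."""
    dx = target_pos[0] - ctrl_pos[0]
    dy = target_pos[1] - ctrl_pos[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)
```

The heading orients the arrow from the position mark control X3 toward the position mark 501F, and the distance can be used to scale or label the dotted path.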
  • the position mark control is a control that follows the position of the marked object in the virtual scene and moves accordingly on the map.
  • the position mark control is restored to the current position of the second virtual object.
• The position mark control X3' at the second position is hidden. If the current position of the second virtual object remains the first position, the position mark control X3, exiting the enlarged mode, is restored to the first position (exiting the enlarged mode, that is, displaying the position mark control X3 at its original size).
  • step 807 is executed to match the corresponding message in the message trigger condition library based on the end position characteristics of the movement operation.
• The method of determining the end position characteristics in step 807 is the same as step 804 above, and the message matching principle is the same as step 805 above; details are not repeated here.
  • Both defensive instructions and offensive instructions are instructions for operating virtual objects.
  • the message matching principle corresponding to the defensive instructions is the same as that of the offensive instructions, which will not be described again here.
  • Figure 5E is a schematic diagram of a message processing method in a virtual scene provided by an embodiment of the present application.
  • the command in the current selected state of the command control 502A is an attack command.
• The small icon of the attack command can be displayed at the second position.
  • the small icon of the attack command is synchronously displayed on the map of the second virtual object, which facilitates the second virtual object to determine the location to be attacked.
• The location mark 501F of the second location is displayed in the map interface of the first virtual object, and is synchronously displayed in the map interface corresponding to the second virtual object that receives the message.
  • a "defense mark position” message is sent to the designated teammate.
  • a "protect a teammate” message is sent to the designated teammate, where a teammate refers to a teammate number or name.
• Messages are classified and queried based on the type of instruction, which improves the efficiency of querying messages in the message trigger condition library, so that a message can be sent immediately when the movement operation is released or ends, thus improving the efficiency of message sending.
  • step 806 is executed to send the matching message to the teammate corresponding to the teammate icon control.
• The embodiment of the present application reuses the position mark control in the map of the virtual scene, so that the user can quickly send point-to-point messages to a teammate through the movement operation, on the map, of the position mark control representing that teammate.
• The point-to-point message sending method avoids interfering with irrelevant players (players who do not need to receive the message) and avoids burdening the running memory of irrelevant players' clients. At the same time, it saves the graphics computing resources required for the virtual scene and is not restricted by radio or playback equipment, enabling efficient message sending in virtual scenes.
• The message processing device 455 in the virtual scene stored in the memory 430 is described below.
• The software modules in the device 455 may include: a display module 4551, configured to display a map of at least a partial area of the virtual scene in the map interface corresponding to the first virtual object; the display module 4551 is further configured to, in response to a second virtual object appearing in the partial area, display on the map a position mark control used to represent the first position where the second virtual object is currently located, where the second virtual object is any virtual object that belongs to the same camp as the first virtual object; and a message sending module 4552, configured to move the position mark control from the first position to the second position in response to the movement operation on the position mark control, and send a message to the second virtual object, where the message carries the second position and the instruction.
• The message sending module 4552 is further configured to display an instruction control inside the map or outside the map, where the instruction control includes multiple types of candidate instructions; in response to an instruction selection operation for any candidate instruction in the instruction control, the selected candidate instruction is used as the instruction carried by the message.
  • the message sending module 4552 is further configured to, in response to an instruction selection operation for any candidate instruction in the instruction control, maintain the selected candidate instruction in the selected state before receiving the next instruction selection operation; or , after sending the message to the second virtual object, switching from displaying that the selected candidate instruction is in the selected state to displaying that the default instruction is in the selected state, wherein the default instruction is set to be in the automatically selected state among multiple types of candidate instructions. candidate instructions.
  • the messaging module 4552 is further configured to display an instruction control inside the map or outside the map, wherein the instruction control includes multiple types of candidate instructions, and one of the multiple types of candidate instructions is at Automatically selected state; in response to not receiving an instruction selection operation for any candidate instruction in the instruction control within a set period of time, the candidate instruction in the automatically selected state is used as an instruction carried in the message.
• The message sending module 4552 is further configured to, when the instruction control includes multiple candidate instructions, sort the multiple candidate instructions in any of the following ways: sorting in descending or ascending order according to the frequency of use of each candidate instruction; sorting according to the order in which each candidate instruction is set; sorting in ascending or descending order according to the usage probability of each candidate instruction.
• The message sending module 4552 is also configured to call a neural network model for prediction processing based on the parameters of the virtual objects in the virtual scene to obtain the usage probability corresponding to each candidate instruction. The parameters of the virtual objects include at least one of the following: the position and attribute values of the first virtual object, where the attribute values include combat power and health value; the position and attribute values of the second virtual object; the difference between the attribute values of the camp to which the first virtual object belongs and the attribute values of the hostile camp. The neural network model is trained based on game data of at least two camps, and the game data includes: the positions and attribute values of multiple virtual objects in the at least two camps, the instructions executed by the virtual objects of the winning camp, and the instructions executed by the virtual objects of the losing camp; each instruction executed by a virtual object of the winning camp is labeled with probability 1, and each instruction executed by a virtual object of the losing camp is labeled with probability 0.
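The frequency- and probability-based orderings of candidate instructions described above might be sketched as follows; the function name and data values are hypothetical.

```python
def sort_candidates(candidates, score, descending=True):
    """Sort candidate instructions by a per-instruction score, e.g. usage
    frequency or the usage probability predicted by the model; instructions
    missing from the score table default to 0."""
    return sorted(candidates, key=lambda c: score.get(c, 0), reverse=descending)
```

The same routine covers both orderings: pass usage counts for frequency sorting, or the model's predicted probabilities for probability sorting.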
• The message sending module 4552 is further configured to display the selected position mark control in a selected state in response to a selection operation for any position mark control, and to delete the selected position mark control in response to a deletion operation for the selected position mark control.
• The message sending module 4552 is also configured to display, outside the map, a position mark control of a non-appearing virtual object, where the non-appearing virtual object is a second virtual object that does not currently appear in the partial area; in response to a movement operation for the position mark control of the non-appearing virtual object, move the position mark control from outside the map to the second position, and send a message to the non-appearing virtual object, where the message is used to instruct the non-appearing virtual object to arrive at the second position and execute the instruction.
• The message sending module 4552 is further configured to display, in response to a batch selection operation, the position mark controls of multiple second virtual objects in a selected state; in response to the movement operation, move the multiple position mark controls from the first position to the second position respectively, and send a message to the second virtual objects respectively corresponding to the multiple position mark controls, where the message is used to instruct the second virtual objects respectively corresponding to the multiple position mark controls to arrive at the second position and execute the instruction.
• The message sending module 4552 is further configured to display, in the map, a send message control corresponding to an unmoved virtual object, where the unmoved virtual object is a second virtual object to which the message has not yet been sent; the send message control is used to send messages repeatedly; in response to a trigger operation for any send message control, a message is sent to the unmoved virtual object corresponding to the triggered send message control.
• When the type of the instruction is a movement instruction and a virtual vehicle exists at the second location, the content of the message is to go to the second location to assemble and enter the virtual vehicle; when the type of the instruction is a movement instruction and no virtual vehicle exists at the second location, the content of the message is to go to the second position to assemble; when the type of the instruction is a defense instruction, the content of the message is to go to the second position for defense; when the type of the instruction is an attack instruction, the content of the message is to go to the second position and attack.
• The message sending module 4552 is also configured to send the message to the second virtual object in any of the following ways: in response to the movement operation for the position mark control being released, display a message type selection control, where the message type selection control includes the following message types: voice message, text message, and voice and text mixed message; in response to a selection operation on the type selection control, send the message to the second virtual object based on the selected message type; or, in response to the movement operation for the position mark control being released, send the message to the second virtual object based on a set message type.
  • the display module 4551 is further configured to display the map interface in any one of the following ways before displaying the map of at least part of the virtual scene in the map interface corresponding to the first virtual object: in the virtual scene interface Display the virtual scene, and display the map interface on the floating layer covering part of the virtual scene interface; display the virtual scene in the virtual scene interface, and display the map interface in an area outside the virtual scene interface.
  • the map is a preview of the entire area of the virtual scene, or the map is a preview of a partial area of the virtual scene, where the partial area is an area radiating outward from the center of the first virtual object.
• The message sending module 4552 is further configured to display the position mark control corresponding to the pressing operation in an enlarged mode in response to the duration of the pressing operation for the position mark control reaching the pressing duration threshold; in response to the movement operation for the position mark control, control the position mark control displayed in the enlarged mode to move synchronously from the first position; and in response to the movement operation being released at the second position, move the position mark control displayed in the enlarged mode to the second position.
• The message sending module 4552 is also configured to, before sending the message to the second virtual object, determine, based on the movement operation, the first position, and the second position, the starting position characteristics and the end position characteristics corresponding to the movement operation in the virtual scene; use the starting position characteristics and the end position characteristics as trigger conditions; and query the database based on the trigger conditions to obtain messages that match the trigger conditions, where the database stores the correspondences between different messages and different trigger conditions.
• The message sending module 4552 is further configured to determine the first area where the first position is located in the virtual scene and the second area where the second position is located in the virtual scene; determine the mark type corresponding to the second position in the map interface, where the mark type includes no mark, a virtual object position mark, and a virtual vehicle position mark; use the first area as the starting position feature of the movement operation, and use the second area and the mark type as the end position features of the movement operation.
• The message sending module 4552 is further configured to detect a partial area in the map centered on the second position; when at least one position mark control is detected, the mark type corresponding to the detected position mark control closest to the second position is used as the mark type corresponding to the second position in the map interface; when no position mark control is detected, no mark is used as the mark type corresponding to the second position in the map interface.
• The display module 4551 is further configured to, in response to at least one virtual vehicle appearing in the partial area, display on the map a position mark control used to represent that the virtual vehicle is at the second position, where the mark type of the position mark control of the virtual vehicle is the virtual vehicle position mark.
• The message sending module 4552 is further configured to, in response to a trigger operation for the first mark control in the map, display entry into the place mark mode, and, in response to a click operation on the map, display a first custom mark point at the clicked location on the map, where the first custom mark point is used for synchronous display in the map interface corresponding to the second virtual object; and, in response to a trigger operation for the second mark control in the map, display a second custom mark point at the first position where the first virtual object is currently located on the map, where the second custom mark point is used for synchronous display in the map interface corresponding to the second virtual object.
  • Embodiments of the present application provide a computer program product or computer program that includes computer instructions stored in a computer-readable storage medium.
  • the processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the message processing method in a virtual scene described above in the embodiments of the present application.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions.
  • when executed by a processor, the executable instructions cause the processor to perform the message processing method in a virtual scene provided by the embodiments of the present application, for example the method shown in Figure 3A.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; it may also be any of various devices including one of, or any combination of, the above memories.
  • executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may be deployed for execution on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
  • in the present application, a position mark control of a second virtual object in the same faction as the first virtual object is displayed inside the map, or the second virtual object's position mark control is displayed outside the map; when that control is moved, the corresponding command and message are sent to the second virtual object based on the movement operation.
  • the map interface of the virtual scene is thus reused to send point-to-point messages quickly, without speaking or entering text.
  • dragging the control sends a message immediately, saving the time required for message sending; and, because the message is sent only to the second virtual object, precise point-to-point delivery is achieved, avoiding disturbance to other virtual objects in the same faction. At the same time, the position mark controls in the map interface of the virtual scene are reused, so no new controls need to be added to the human-computer interaction interface for message sending; point-to-point messaging works without an audio capture device (such as a microphone), which saves the computing resources required by the virtual scene.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a message processing method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene. The method includes: displaying, in the map interface corresponding to a first virtual object, a map of at least a partial region of the virtual scene; in response to at least one second virtual object appearing in the partial region, displaying in the map a position mark control representing the first location where the second virtual object is currently located, where the second virtual object is any virtual object belonging to the same faction as the first virtual object; and, in response to a movement operation on the position mark control, moving the control from the first location to a second location and sending a message to the second virtual object, the message instructing the second virtual object to go to the second location and execute a command.

Description

Message processing method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene

Cross-reference to related applications

This application is based on, and claims priority from, Chinese patent application No. 202210563612.6 filed on May 23, 2022, the entire contents of which are incorporated herein by reference.

Technical field

This application relates to computer technology, and in particular to a message processing method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene.
Background

Display technology based on graphics processing hardware has expanded the channels for perceiving environments and obtaining information. Virtual-scene display technology in particular can realize diverse interactions between virtual objects controlled by users or by artificial intelligence according to application needs, and has many typical applications; in game scenes, for example, it can simulate real battles between virtual objects.

In a virtual scene, users communicate with other users via voice chat, in-game quick markers, quick messages, and so on. These messages are usually sent to everyone in the same faction or team, so some teammates are disturbed by messages irrelevant to them. For a specific teammate, voice chat can be efficient, but some users' terminal devices may not be equipped with the hardware or software needed for voice communication. The related art has not yet proposed a scheme for sending messages efficiently to a specific user.
Summary

Embodiments of this application provide a message processing method, apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene, which can send point-to-point messages efficiently in a virtual scene and thereby eliminate the disturbance such messages cause to unrelated users.

The technical solutions of the embodiments of this application are realized as follows:

An embodiment of this application provides a message processing method in a virtual scene, including:

displaying, in the map interface corresponding to a first virtual object, a map of at least a partial region of the virtual scene;

in response to at least one second virtual object appearing in the partial region, displaying in the map a position mark control representing the first location where the second virtual object is currently located, where the second virtual object is any virtual object belonging to the same faction as the first virtual object;

in response to a movement operation on the position mark control, moving the control from the first location to a second location and sending a message to the second virtual object, where the message instructs the second virtual object to go to the second location and execute a command.

An embodiment of this application provides a message processing apparatus for a virtual scene, including:

a display module configured to display, in the map interface corresponding to a first virtual object, a map of at least a partial region of the virtual scene;

the display module being further configured to, in response to at least one second virtual object appearing in the partial region, display in the map a position mark control representing the first location where the second virtual object is currently located, where the second virtual object is any virtual object belonging to the same faction as the first virtual object;

a message sending module configured to, in response to a movement operation on the position mark control, move the control from the first location to a second location and send a message to the second virtual object, where the message instructs the second virtual object to go to the second location and execute a command.
An embodiment of this application provides an electronic device, including:

a memory for storing executable instructions;

a processor which, when executing the executable instructions stored in the memory, implements the message processing method in a virtual scene provided by the embodiments of this application.

An embodiment of this application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the message processing method in a virtual scene provided by the embodiments of this application.

An embodiment of this application provides a computer program product including a computer program or instructions which, when executed by a processor, implement the message processing method in a virtual scene provided by the embodiments of this application.

The embodiments of this application have the following beneficial effects:

By displaying in the map interface a position mark control for a second virtual object in the same faction as the first virtual object, and, when that control is moved, sending the second virtual object the corresponding command and message based on the movement operation, point-to-point messages can be sent quickly through the map interface of the virtual scene. Compared with sending messages by voice or typed text in the related art, dragging the position mark control is faster and saves the time required for message sending;

Because the message is sent only to the second virtual object, precise point-to-point delivery is achieved and other virtual objects in the same faction are not disturbed;

Reusing the position mark controls in the map interface of the virtual scene to send messages, rather than adding new controls to the human-computer interaction interface, simplifies the interaction logic of the virtual scene and improves operating efficiency; point-to-point messaging works without an audio capture device (such as a microphone), saving the computing resources required by the virtual scene.
Brief description of the drawings

Figure 1A is a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application;

Figure 1B is a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application;

Figure 2 is a schematic structural diagram of the terminal device 400 provided by an embodiment of this application;

Figures 3A to 3F are schematic flowcharts of the message processing method in a virtual scene provided by embodiments of this application;

Figure 4A is a schematic diagram of a map interface displayed within the virtual scene interface, provided by an embodiment of this application;

Figure 4B is a schematic diagram of a map interface independent of the virtual scene interface, provided by an embodiment of this application;

Figures 5A to 5F are map schematics of the message processing method in a virtual scene provided by embodiments of this application;

Figures 6A to 6G are map schematics of the message processing method in a virtual scene provided by embodiments of this application;

Figure 7A is a schematic diagram of the arrangement of command controls provided by an embodiment of this application;

Figure 7B is a schematic diagram of the virtual scene interface corresponding to a second virtual object, provided by an embodiment of this application;

Figure 8 is an optional schematic flowchart of the message processing method in a virtual scene provided by an embodiment of this application.
Detailed description

To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings. The described embodiments should not be regarded as limiting this application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of this application.

In the following description, "some embodiments" describes a subset of all possible embodiments; it may denote the same subset or different subsets of all possible embodiments, and these may be combined with each other where no conflict arises.

In the following description, the terms "first/second/third" merely distinguish similar objects and do not denote any particular ordering of objects; where permitted, the specific order or sequence may be interchanged so that the embodiments described here can be implemented in an order other than that illustrated or described.

Note that the embodiments of this application involve data such as user information and user feedback data. When the embodiments are applied to a specific product or technology, user permission or consent must be obtained, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.

Unless defined otherwise, all technical and scientific terms used herein have the meanings commonly understood by those skilled in the technical field of this application. The terms used herein are only for describing the embodiments and are not intended to limit this application.
Before the embodiments are described in further detail, the nouns and terms involved in the embodiments are explained; they are to be interpreted as follows.

1) Virtual scene: a scene output by a device that is distinct from the real world, of which visual perception can be formed with the naked eye or with device assistance, for example two-dimensional images output on a display screen, or three-dimensional images output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory, and motion perception, can be formed through various possible hardware.

2) "In response to": denotes the condition or state on which an executed operation depends. When the condition or state on which it depends is satisfied, the one or more executed operations may be real-time or may have a set delay; unless otherwise specified, there is no restriction on the order of execution of multiple operations.

3) Virtual object: an object that interacts in the virtual scene, controlled by a user or by a bot program (for example, an AI-based bot), and able to stand still, move, and perform various behaviors in the virtual scene, such as the various characters in a game.

4) Map: used to display the terrain of at least a partial region of the virtual scene and the various elements on its surface (for example buildings, virtual vehicles, and virtual objects).

5) Point-to-point message: a message sent point-to-point from one terminal device to another terminal device.
Embodiments of this application provide a message processing method, message processing apparatus, electronic device, computer-readable storage medium, and computer program product for a virtual scene, which can send point-to-point messages efficiently in a virtual scene and thereby eliminate the disturbance such messages cause to unrelated users.

The electronic device provided by the embodiments may be implemented as various types of user terminals such as a notebook computer, tablet computer, desktop computer, set-top box, or mobile device (for example a mobile phone, portable music player, personal digital assistant, dedicated messaging device, portable game device, or in-vehicle terminal), and may also be implemented as a server.

In one implementation scenario, see Figure 1A, a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application. It is suitable for application modes in which the computation of virtual-scene data can be completed entirely by the graphics processing hardware of the terminal device 400, such as stand-alone or offline games, with the virtual scene output by various types of terminal devices 400 such as smartphones, tablets, virtual reality devices, and augmented reality devices.

By way of example, the types of graphics processing hardware include the central processing unit (CPU) and the graphics processing unit (GPU).

By way of example, a client 401 (for example a stand-alone game application) runs on the terminal device 400 and, during its operation, outputs a virtual scene involving role playing. The virtual scene may be an environment for game characters to interact in, for example a plain, street, or valley where characters battle. Taking a first-person view as an example, the virtual scene displays a first virtual object and a launching prop (for example a shooting or throwing prop) held by its holding part (for example a hand). The first virtual object may be a game character controlled by a user, that is, it is controlled by a real user and moves in the virtual scene in response to the real user's operations on a controller (for example a touch screen, voice switch, keyboard, mouse, or joystick): when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene; it can also stay still, jump, and be controlled to perform shooting operations. The second virtual object is a virtual object in the same faction as the first virtual object. The map interface is displayed as a floating layer over a partial region of the virtual scene interface, or the map interface of the virtual scene is displayed in an interface independent of the virtual scene interface.

For example, the first virtual object may be the user-controlled virtual object. The client 401 displays, in the map interface corresponding to the first virtual object, a map 102 of at least a partial region of the virtual scene 101; in response to at least one second virtual object (in the same faction as the first virtual object) appearing in the partial region, it displays in the map a position mark control representing the first location where the second virtual object is currently located. In response to a movement operation on the position mark control, it moves the control from the first location to a second location and sends a message to the second virtual object, where the message carries the second location and a command and instructs the second virtual object to go to the second location and execute the command.
In another implementation scenario, see Figure 1B, a schematic diagram of an application mode of the message processing method in a virtual scene provided by an embodiment of this application. It is applied to the terminal device 400 and the server 200 and is suitable for application modes that rely on the computing power of the server 200 to compute the virtual scene and output the virtual scene on the terminal device 400.

Taking the formation of visual perception of the virtual scene as an example, the server 200 computes the display data of the virtual scene (for example scene data) and sends it via the network 300 to the terminal device 400, which relies on graphics computing hardware to load, parse, and render the display data, and on graphics output hardware to output the virtual scene and form visual perception; for example, two-dimensional video frames may be presented on a smartphone's display screen, or video frames achieving a three-dimensional display effect may be projected on the lenses of augmented reality / virtual reality glasses. As for perception of other forms of the virtual scene, it will be understood that these can be output via the corresponding hardware of the terminal device 400, for example using a speaker to form auditory perception, a vibrator to form tactile perception, and so on.

By way of example, a client 401 (for example an online game application) runs on the terminal device 400 and interacts with other users in the game by connecting to the server 200 (for example a game server). The terminal device 400 outputs the virtual scene 101 of the client 401, in which are displayed a first virtual object and a launching prop (for example a shooting or throwing prop) held by its holding part (for example a hand). The first virtual object may be a game character controlled by a user, that is, it is controlled by a real user and moves in the virtual scene in response to the real user's operations on a controller (for example a touch screen, voice switch, keyboard, mouse, or joystick): when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene; it can also stay still, jump, and be controlled to perform shooting operations. The second virtual object is a virtual object in the same faction as the first virtual object. The map interface is displayed as a floating layer over a partial region of the virtual scene interface, or the map interface of the virtual scene is displayed in an interface independent of the virtual scene interface. In Figure 1A, the map 102 is displayed as a floating layer in the virtual scene 101.

For example, the first virtual object may be the user-controlled virtual object. The client 401 displays, in the map interface corresponding to the first virtual object, a map 102 of at least a partial region of the virtual scene 101; in response to at least one second virtual object (in the same faction as the first virtual object) appearing in the partial region, it displays in the map a position mark control representing the first location where the second virtual object is currently located. In response to a movement operation on the position mark control, it moves the control from the first location to a second location and sends a message to the second virtual object, where the message carries the second location and a command and instructs the second virtual object to go to the second location and execute the command.

Taking an application program as the computer program, the terminal device 400 runs an application supporting the virtual scene. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application, a three-dimensional map program, or a multiplayer survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene, whose activities include but are not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and constructing virtual buildings. Illustratively, the virtual object may be a virtual character, such as a realistic or anime character.
In other embodiments, the embodiments of this application may also be implemented by means of cloud technology, a hosting technology that unifies hardware, software, network, and other resources within a wide-area or local-area network to realize the computation, storage, processing, and sharing of data.

Cloud technology is the general term for the network, information, integration, management platform, and application technologies based on the cloud computing business model; it can form resource pools that are used on demand, flexibly and conveniently. Cloud computing technology will become an important support: the back-end services of technical network systems require large amounts of computing and storage resources. Cloud gaming, also called gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capability to run high-quality games. In a cloud gaming scenario, the game runs not on the player's game terminal but on a cloud server, which renders the game scene into video and audio streams transmitted over the network to the player's game terminal. The player's game terminal does not need powerful graphics and data processing capability; it needs only basic streaming playback capability and the ability to obtain player input commands and send them to the cloud server.

Illustratively, the server 200 in Figure 1B may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms. The terminal device 400 and the server 200 may be connected directly or indirectly by wired or wireless communication, which the embodiments of this application do not limit.
The structure of the terminal device 400 shown in Figure 1A is described below. See Figure 2, a schematic structural diagram of the terminal device 400 provided by an embodiment of this application. The terminal device 400 shown in Figure 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The components of the terminal device 400 are coupled together by a bus system 440, which, it will be understood, implements the connections and communication between these components. Besides a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus; for clarity, the various buses are all labeled bus system 440 in Figure 2.

The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, discrete gate or transistor logic, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.

The user interface 430 includes one or more output devices 431 enabling the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch display screen, camera, and other input buttons and controls.

The memory 450 may be removable, non-removable, or a combination of the two. The memory 450 includes volatile memory or non-volatile memory, and may include both. In some embodiments, the memory 450 can store data to support various operations; examples of such data include programs, modules, and data structures, or subsets or supersets thereof, exemplified below.

An operating system 451, including system programs for handling various basic system services and executing hardware-related tasks, such as a framework layer, core library layer, and driver layer, for implementing various basic services and handling hardware-based tasks;

a network communication module 452, for reaching other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wi-Fi, and Universal Serial Bus (USB);

a presentation module 453, for enabling the presentation of information (for example a user interface for operating peripherals and displaying content and information) via one or more output devices 431 (for example a display screen or speakers) associated with the user interface 430;

an input processing module 454, for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.

In some embodiments, the message processing apparatus for a virtual scene provided by the embodiments of this application may be implemented in software. Figure 2 shows a message processing apparatus 455 stored in the memory 450, which may be software in the form of a program, plug-in, and so on, including the following software modules: a display module 4551 and a message sending module 4552. These modules are logical, and may therefore be arbitrarily combined or further split according to the functions realized.
The message processing method in a virtual scene provided by the embodiments of this application is described in detail below with reference to the drawings. The method may be executed by the terminal device 400 in Figure 1A. See Figure 3A, a schematic flowchart of the method; the description follows the steps shown in Figure 3A.

In step 301A, a map of at least a partial region of the virtual scene is displayed in the map interface corresponding to the first virtual object.

Illustratively, the map is a preview of the entire region of the virtual scene, or the map is a preview of a partial region of the virtual scene, where the partial region is a region radiating outward from the first virtual object at its center. In the embodiments, the first virtual object is taken as the virtual object corresponding to the user; the second virtual object is another virtual object in the same faction as the first virtual object and may be controlled by another user or by artificial intelligence (control by another user is taken as the example here). The first virtual object is the virtual object that sends the message; the second virtual object is the virtual object that receives it.

Illustratively, see Figure 5A, a map schematic of the method. The map 501A is a view of the entire region of the virtual scene. At the outer edge of the map 501A is a map zoom control 503A for adjusting the scale between the map and the virtual scene: moving the round icon of the zoom control 503A toward the plus sign 504A zooms the map in; conversely, moving it toward the minus sign 505A zooms the map out. Position mark control X2 is the first virtual object's position mark control; it displays a line segment representing the first virtual object's view direction in the virtual scene, and the digit 2 indicates that the first virtual object's number within the team or faction is 2 (numbers distinguish the position mark controls of different virtual objects in the same faction). Position mark control X3 is the second virtual object numbered 3.

In some embodiments, before step 301A, the map interface may also be displayed in either of the following ways:

1. Display the virtual scene in the virtual scene interface, and display the map interface on a floating layer covering a partial region of the virtual scene interface (for example its upper or lower corner).

Illustratively, the map interface may be displayed continuously, or displayed in response to a call-out operation on the map interface and hidden in response to a withdraw operation on it. Referring to Figure 1B or Figure 1A, the map 102 is continuously displayed as a floating layer in the upper-right corner of the virtual scene 101. See Figure 4A, a schematic of a map interface displayed within the virtual scene interface: in the virtual scene interface 401A, in response to a call-out operation on the map interface 402A (for example pressing the shortcut key corresponding to the map interface), the map interface 402A is displayed as a floating layer in the virtual scene interface 401A.

2. Display the virtual scene in the virtual scene interface, and display the map interface in a region outside the virtual scene interface. See Figure 4B, a schematic of a map interface independent of the virtual scene interface: in Figure 4B, the map interface 402B and the virtual scene interface 401B correspond to different tabs, and the map interface 402B is displayed independently of the virtual scene interface 401B. Illustratively, tabs are only one way of displaying independently of the virtual scene interface; in practice the map interface 402B may also be displayed independently in other ways.
In step 302A, in response to at least one second virtual object appearing in the partial region, a position mark control representing the first location where the second virtual object is currently located is displayed in the map.

Here, the second virtual object is any virtual object belonging to the same faction as the first virtual object.

Illustratively, a virtual object's position mark control moves synchronously in the map as the virtual object moves in the virtual scene; besides virtual objects' position mark controls, the map may also display marker points and position mark controls of virtual vehicles. A marker point is a point at a fixed location on the map.

In some embodiments, marker points may also be generated in the map as follows: in response to a trigger operation on a first mark control in the map, display entry into a place-marking mode, and in response to a click operation on the map, display a first custom marker point at the clicked location on the map; in response to a trigger operation on a second mark control in the map, display a second custom marker point at the first location on the map where the first virtual object is currently located.

Here, the first custom marker point is displayed synchronously in the map interface corresponding to the second virtual object, and the second custom marker point is likewise displayed synchronously in the map interface corresponding to the second virtual object.

Illustratively, the marking mode may be indicated in any of the following ways: a text prompt of entering the map-marking mode, switching the map's background color to another color, or highlighting the grid lines used as position references in the map.

In the embodiments, displaying the custom marker points synchronously in the second virtual object's map interface lets teammates share the position information corresponding to the marker points, making it easier for teammates in the same faction to cooperate based on different position marks. At the same time, marker points can serve as reference points for different positions on the map, so the user can drag a position mark control to the desired position based on the reference points, improving the accuracy of the second location carried in the message.

Illustratively, see Figure 5D, a map schematic of the method: in the map 501A, the first mark control 501D and the second mark control 502D are displayed at the inner edge of the map. The place-marking mode can be entered by triggering the first mark control 501D (comparing Figures 5A and 5D, the map 501A in Figure 5D is shown in a different color than in Figure 5A). In place-marking mode, in response to a click operation on any position on the map, a first custom marker point is displayed at the clicked location on the map, for example the first custom marker point D1. When the second mark control 502D is triggered, a second custom marker point D2 is displayed at the position of the first virtual object's position mark control X2.
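The marker-sharing behavior described above (a custom marker placed by one player appears on every same-faction teammate's map) can be sketched in a few lines. This is a minimal illustration only; the class and function names (`MapInterface`, `Player`, `place_custom_marker`) are assumptions for the sketch, not names from the patent.

```python
class MapInterface:
    def __init__(self):
        self.markers = []  # custom marker points visible on this map

class Player:
    def __init__(self, pid, faction):
        self.pid, self.faction = pid, faction
        self.map = MapInterface()

def place_custom_marker(owner, players, position):
    """Show the marker on the owner's map and sync it to the map of every
    player in the owner's faction; other factions' maps are left untouched."""
    marker = {"owner": owner.pid, "pos": position}
    for p in players:
        if p.faction == owner.faction:
            p.map.markers.append(marker)
    return marker

# player 2 drops a marker; teammate 3 sees it, enemy 9 does not
a, b, e = Player(2, "blue"), Player(3, "blue"), Player(9, "red")
place_custom_marker(a, [a, b, e], (120, 88))
```

In a networked implementation the loop body would be a server broadcast to faction members' clients rather than a direct list append, but the filtering logic is the same.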
In the embodiments, the second virtual object is a teammate of the first virtual object, and custom marker points are displayed synchronously in the second virtual object's map interface; that is, a custom marker placed by a user on their own map is shared to other teammates' maps, and every user on the team can see the custom marker point on their own map, realizing marker sharing and improving interaction efficiency.

In some embodiments, the position mark control corresponding to a virtual vehicle may also be displayed as follows: in response to at least one virtual vehicle (for example a car, motorcycle, or aircraft) appearing in the partial region, display in the map a position mark control representing that the virtual vehicle is at the second location, where the mark type of the virtual vehicle's position mark control is the virtual-vehicle position mark.

Illustratively, a virtual vehicle is a prop in the virtual scene for carrying virtual objects; in response to a driving operation on a virtual vehicle, the virtual scene displays imagery of the vehicle carrying the virtual object as it moves. The virtual vehicle's position mark control in the map moves as the vehicle's position changes in the virtual scene. See Figure 6A, a map schematic of the method: the vehicle's position mark control Z1 is displayed adjacent to the first virtual object's position mark control X2. If the first virtual object drives the vehicle corresponding to control Z1, controls Z1 and X2 are displayed superimposed, and controls X2 and Z1 move synchronously.
In step 303A, in response to a movement operation on the position mark control, the control is moved from the first location to the second location.

In some embodiments, see Figure 3B, a schematic flowchart of the method. Step 303A may be implemented through steps 3031B to 3033B, described below.

In step 3031B, in response to the duration of a press operation on the position mark control reaching a press-duration threshold, the position mark control corresponding to the press operation is displayed in an enlarged mode.

Illustratively, the enlarged mode displays the control at a preset multiple of its original size, the preset multiple being greater than 1, for example 1.2 times the original size. See Figure 5B, a map schematic of the method: position mark control X3 in Figure 5B is displayed in enlarged mode, larger than the original-size X3 in Figure 5A. The hand icon indicates the press operation on control X3; the press-duration threshold may be 0.5 seconds or less. When the duration of the press operation reaches the threshold, X3 is displayed in enlarged mode and can be moved.

In step 3032B, in response to a movement operation on the position mark control, the control displayed in enlarged mode is moved synchronously starting from the first location.

Illustratively, the first location is the start position of the movement operation. Synchronous movement means that during the movement operation, in response to the user continuously pressing control X3, control X3 is displayed synchronously at the press position of the movement operation on the map. Continuing with Figure 5B, X3` represents the moved control X3, the direction of the arrow between them is the direction of the movement operation, and the dashed line is X3's movement trajectory.

In step 3033B, in response to the movement operation being released at the second location, the control displayed in enlarged mode is moved to the second location.

Illustratively, the movement operation being released at the second location means the user's finger is lifted to stop pressing the map when it reaches the second location, which is the end position of the movement operation. Continuing with Figure 5B, the second location of X3` on the map 501A is the end position of the movement operation.
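The press / drag / release flow of steps 3031B to 3033B can be read as a small state machine. The following sketch, under assumed names and an assumed 0.5-second threshold (the patent permits smaller values), shows the three handlers; it is illustrative, not an implementation from the patent.

```python
PRESS_THRESHOLD = 0.5  # seconds; example value, the threshold may be smaller

class MarkerControl:
    def __init__(self, position):
        self.position = position
        self.enlarged = False  # True while shown in "enlarged mode"

def on_press(marker, press_duration):
    """Enter enlarged (movable) mode once the press lasts long enough."""
    if press_duration >= PRESS_THRESHOLD:
        marker.enlarged = True
    return marker.enlarged

def on_drag(marker, new_position):
    """While enlarged, the control tracks the press position on the map."""
    if marker.enlarged:
        marker.position = new_position

def on_release(marker, release_position):
    """Releasing ends the drag; the release point is the 'second location',
    which the caller then uses to build the point-to-point message."""
    if not marker.enlarged:
        return None
    marker.enlarged = False
    marker.position = release_position
    return release_position

m = MarkerControl((10, 10))   # first location
on_press(m, 0.6)              # long press: control becomes movable
on_drag(m, (40, 25))          # control follows the finger
end = on_release(m, (55, 30)) # second location; triggers message sending
```

A press shorter than the threshold never sets `enlarged`, so drags and releases on it are ignored, which matches the described behavior.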
In some embodiments, when multiple position mark controls are displayed in the map, superfluous ones may also be deleted as follows: in response to a selection operation on any one position mark control, display the selected control in a selected state (for example inverted colors, highlight, check mark, or cross mark); in response to a delete operation on position mark controls in the selected state, delete the controls in the selected state.

Illustratively, when the position mark controls of multiple second virtual objects are displayed, some of them may be deleted, that is, only the position mark control of the second virtual object being sent the message is retained. Deletion means hiding or blocking a second virtual object's position mark control in the map, or displaying it in a blurred manner.

See Figure 6B, a map schematic of the method. In the upper part of Figure 6B, the map 501A includes marker point Q1, position mark control X4 (representing the second virtual object numbered 4), and a delete control 601B. A cross 602B displayed on the selected marker point Q1 and control X4 indicates the selected state; in response to the delete control 601B being triggered, the selected Q1 and X4 are deleted. The lower part of Figure 6B shows the map 501A with marker point Q1 and control X4 deleted.

In some embodiments, position mark controls may also be deleted automatically as follows: in response to a movement operation on any one position mark control, hide the position mark control of every second virtual object that is not being moved.

See Figure 6C, a map schematic of the method. In the upper part of Figure 6C, a press operation is applied to control X3, and the map 501A includes position mark control X5 (representing the second virtual object numbered 5) and control X4. In the lower part of Figure 6C, while X3 is being moved, the unmoved controls X5 and X4 are hidden. Illustratively, if the movement operation on X3 is released, the hidden controls X5 and X4 are restored.

In the embodiments, deleting some position mark controls prevents excessive controls from occluding the map and saves the resources consumed by graphics computation; at the same time, it makes the map easier for the user to observe and operate, so that the position mark control of the message-receiving second virtual object can be moved to the position the user wants, improving the efficiency of human-computer interaction.
In step 304A, a message is sent to the second virtual object.

Here, the message instructs the second virtual object to go to the second location and execute a command; the message carries the second location and the command and is a point-to-point message.

Illustratively, steps 303A and 304A are executed simultaneously. Message types include voice messages, text messages, and mixed voice-and-text messages.

In some embodiments, the message may be sent to the second virtual object in either of the following ways:

1. In response to the movement operation on the position mark control being released, display a message-type selection control; in response to a selection operation on the type selection control, send the message to the second virtual object based on the selected message type. The message-type selection control includes the following message types: voice messages, text messages, and mixed voice-and-text messages. A mixed voice-and-text message is presented as follows: the text of the message is displayed in the second virtual object's human-computer interaction interface, while the text message and the corresponding voice are played to the second virtual object.

2. In response to the movement operation on the position mark control being released, send the message to the second virtual object based on a preset message type.
In some embodiments, the ways the message instructs the second virtual object include:

1. Present the message content to the second virtual object in the form of voice or text, the content including the command and the second location. For example: the text of a text message is "Go to building B (1234, 5678)", where "building B (1234, 5678)" is the second location, "go to" represents a move command, and (1234, 5678) are building B's coordinates on the map.

2. Present message content including the command in the form of voice or text, and display at least one of the following in the second virtual object's map interface: a position mark at the second location, the direction of the second location relative to the second virtual object's position mark control, or the path between the second virtual object's position mark control and the second location. In this way, the voice or text message need not contain the second location.

For example: the text of the message is "Attack the enemy", and the path between the second virtual object and the second location where the enemy virtual object is located is displayed in the second virtual object's map interface. The text contains no explicit second location, but the displayed path indicates the second location to the second virtual object.

3. Present the message content to the second virtual object in the form of voice or text, the content including the command and the second location, and additionally display at least one of the following in the second virtual object's map interface: a position mark at the second location, the direction of the second location relative to the second virtual object's position mark control, or the path between the second virtual object's position mark control and the second location.

For example: the message content is "Attack the enemy on the plain at (3216, 4578)", and the path between the second virtual object and the second location where the enemy virtual object is located is displayed in the second virtual object's map interface; (3216, 4578) are the position coordinates of the second location, and "on the plain" describes the second location.

In some embodiments, the second location may be displayed as follows: when a position mark control or position mark exists at the second location, display it prominently (for example highlighted, circled with an annotation box, in another color, bold, or flashing); when no position mark control or position mark exists at the second location, display a position mark at the second location.

The following illustrates the process of instructing the second virtual object via a message according to way 2 above. See Figure 7B, a schematic of the virtual scene interface corresponding to the second virtual object: the map 702 is displayed in the upper-right corner of the virtual scene 701, and the imagery related to the movement operation in the first virtual object's map is displayed synchronously on the second virtual object's map 702, making the message more conspicuous and easier for the user corresponding to the second virtual object to react to in time. The virtual scene 701 shows the message text 703 "Regroup at teammate 2", that is, regroup at the position of the virtual object corresponding to the user who sent the message; teammate 2 here is the first virtual object. The map 702 shows the direction and the path between the second virtual object's position mark control and the second location.
In some embodiments, before step 303A, the command carried by the message may be determined as follows: display a command control inside or outside the map, where the command control includes multiple types of candidate commands; in response to a command selection operation on any one candidate command in the command control (the command selection operation may be performed before or after the movement operation), display the selected candidate in a selected state and use the selected candidate as the command carried by the message.

Illustratively, continuing with Figure 5A, the command control 502A is displayed outside the map 501A. See Figure 7A, a schematic of the arrangement of command controls: the command types of control 502A include an attack command, a defend command, and a move command. The dark portion indicates a candidate command in the selected state; the selected state may also be shown by highlighting, bold, check marks, and so on.

In some embodiments, regarding the selected state: in response to a command selection operation on any one candidate command in the command control, keep the selected candidate in the selected state until the next command selection operation is received; or, after sending the point-to-point message to the second virtual object, switch from displaying the selected candidate in the selected state to displaying a default command in the selected state.

Here, the default command is the candidate command among the multiple types that is set to be in an automatically-selected state.

Illustratively, the default command may be the first command at the head of a descending ranking of all candidates' usage probabilities; for example, since move commands are used frequently in virtual scenes, the move command may be used as the default. For example, if the default is the move command and the user selects the attack command and sends a message, the attack command is switched out of the selected state and the move command is switched into the selected state. As another example, if the user selects the move command, it is kept in the selected state until the next command selection operation.

In the embodiments, automatically maintaining a candidate command's selected state in the command control, or switching the default command into the selected state, spares the user from repeatedly operating the command control, saving message-sending time and computing resources.

In some embodiments, before step 303A, the command carried by the message may also be determined as follows: display a command control inside or outside the map, where the command control includes multiple types of candidate commands and one of them is in an automatically-selected state; in response to no command selection operation on any candidate being received within a set duration, use the automatically-selected candidate as the command carried by the message.

Illustratively, the set duration may be 5 minutes: supposing the move command in the command control is in the automatically-selected state, when no command selection operation is received within 5 minutes, the automatically-selected move command is used as the command carried by the message.

In the embodiments, keeping a command in the automatically-selected state selects the message's command for the user without frequent operation, saving message-sending time and computing resources.
In some embodiments, when the command control includes multiple candidate commands, they may also be sorted in any of the following ways:

1. In descending or ascending order of each candidate command's usage frequency. For example: the usage frequencies of a first virtual object's candidate commands are counted; if the frequencies of the attack command, the move command, and the defend command run from largest to smallest, the candidates are sorted in order of frequency from largest to smallest, and the sorted command control is displayed in the first virtual object's map.

2. In the order set for the candidate commands. For example: the order of the candidates is set by the user as move command, attack command, defend command.

3. In ascending or descending order of each candidate command's usage probability.

Illustratively, the probability-based ordering adapts to the second virtual object being dragged each time; that is, the ordering differs for different types of second virtual objects. For example: second virtual object A often receives messages carrying attack commands; referring to command control 502A` in Figure 7A, in the ordering corresponding to A, the attack command ranks highest and the other commands rank lower. Or: second virtual object B often receives messages carrying move commands; referring to command control 502A in Figure 7A, in the ordering corresponding to B, the move command ranks highest.

In the embodiments, sorting the commands displays the user's frequently used commands, or the commands frequently used for a given second virtual object, at the head of the command control, making it easy for the user to find the desired command quickly and send messages efficiently.
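Two of the orderings above (by per-player usage frequency, and by per-target usage probability) reduce to sorting the candidate list under different keys. A minimal sketch follows; the candidate names and all counts/probabilities are made-up illustration data, not values from the patent.

```python
CANDIDATES = ["attack", "defend", "move"]

def sort_by_frequency(usage_counts, descending=True):
    """Order candidates by how often this player has used each command."""
    return sorted(CANDIDATES,
                  key=lambda c: usage_counts.get(c, 0),
                  reverse=descending)

def sort_by_probability(per_target_prob):
    """Order candidates by predicted usage probability for the dragged
    teammate, so the ordering adapts per second virtual object."""
    return sorted(CANDIDATES,
                  key=lambda c: per_target_prob.get(c, 0.0),
                  reverse=True)

# player who mostly issues move commands sees "move" at the head
order = sort_by_frequency({"move": 50, "attack": 30, "defend": 5})
```

The fixed user-set ordering (way 2) needs no sorting at all; the configured list is displayed as stored.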
In some embodiments, each candidate command's usage probability may be determined as follows: invoke a neural network model for prediction processing based on the parameters of the virtual objects in the virtual scene, obtaining the usage probability corresponding to each candidate command.

The virtual object parameters include at least one of: the first virtual object's position and attribute values, where the attribute values include combat power, health, defense, and so on; the second virtual object's position and attribute values; and the difference between the attribute values of the first virtual object's faction and those of the enemy faction (the difference can characterize the balance of power between the opposing factions).

The neural network model is obtained by training on the match data of at least two factions, where the match data includes: the positions and attribute values of multiple virtual objects in at least two factions, the commands executed by the winning faction's virtual objects, and the commands executed by the losing faction's virtual objects; each command executed by a winning-faction virtual object is labeled with probability 1, and each command executed by a losing-faction virtual object is labeled with probability 0.

Illustratively, the neural network model may be a graph neural network model or a convolutional neural network model. An initial neural network model is trained on the match data: the initial model computes predicted probabilities from the match data, the difference from the actual probabilities serving as labels is fed into a loss function to compute a loss value (the loss function may be a mean-squared-error loss, a mean-absolute-error loss, a quantile loss, a cross-entropy loss, and so on), and the loss value is back-propagated through the initial model, updating the model's parameters via the back-propagation (BP) algorithm, so that the trained model can predict, from the current parameters of same-faction virtual objects, the probability that each candidate command is currently used by the first virtual object.

In the embodiments, obtaining the usage probability via a neural network model improves the accuracy of the obtained probability; sorting the candidate commands by usage probability makes it easy for the user to find the desired command quickly and send messages efficiently.
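The training-data construction described above (winner-executed commands labeled 1, loser-executed commands labeled 0, features drawn from object positions, attribute values, and the faction power gap) can be sketched independently of the model architecture. The dictionary layout below is an assumption for illustration; the model itself is left abstract.

```python
def build_training_set(match):
    """Turn one match record into (features, command, label) samples:
    label 1.0 for commands executed by the winning faction's objects,
    label 0.0 for commands executed by the losing faction's objects."""
    samples = []
    for faction in match["factions"]:
        label = 1.0 if faction["won"] else 0.0
        for obj in faction["objects"]:
            for cmd in obj["commands"]:
                features = (obj["position"], obj["power"], obj["hp"],
                            match["power_gap"])  # faction attribute difference
                samples.append((features, cmd, label))
    return samples

match = {
    "power_gap": 12,
    "factions": [
        {"won": True,  "objects": [{"position": (1, 2), "power": 80,
                                    "hp": 100, "commands": ["attack"]}]},
        {"won": False, "objects": [{"position": (9, 9), "power": 60,
                                    "hp": 40,  "commands": ["defend"]}]},
    ],
}
data = build_training_set(match)
```

A graph or convolutional network would then be fit on such samples with one of the listed losses (MSE, MAE, quantile, or cross-entropy) and back-propagation, and queried at runtime with the current same-faction parameters to rank the candidate commands.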
In some embodiments, see Figure 3C, a schematic flowchart of the method. Before step 304A, the message to be sent to the second virtual object may be determined through steps 3041C to 3042C, described below.

In step 3041C, based on the movement operation, the first location, and the second location, the start-position feature and end-position feature of the movement operation in the virtual scene are determined, and the start- and end-position features are used as the trigger condition.

Illustratively, the start position of the movement operation is the first location and its end position is the second location. A position feature may be the region the position lies in, whether a mark exists near the position, and so on.

In some embodiments, step 3041C may be implemented as follows: determine the first region (for example unsafe zone or safe zone) where the first location lies in the virtual scene, and the second region (for example unsafe zone or safe zone) where the second location lies in the virtual scene; determine the mark type corresponding to the second location in the map interface, where the mark type is one of no mark, virtual-object position mark, or virtual-vehicle position mark; use the first region as the start-position feature of the movement operation, and use the second region together with the mark type as the end-position feature of the movement operation.

Illustratively, in the unsafe zone, a virtual object's health declines periodically; conversely, the safe zone is the region of the virtual scene where a virtual object's health does not enter a state of periodic decline. See Figure 5C, a map schematic of the method: the first location corresponding to control X3 is in the safe zone 501C, the end position of the movement operation is within the safe zone 501C (the position of X3`), and the mark type of the end position is no mark.

In some embodiments, the mark type corresponding to the second location in the map interface may be determined as follows: detect a partial region of the map centered on the second location; for example, see Figure 6G, a map schematic of the method, where the partial region 601G may be a circular region centered on the second location whose radius R is positively correlated with the precision of recognizing mis-operations. When at least one position mark control is detected, use the mark type of the detected position mark control closest to the second location as the mark type corresponding to the second location in the map interface; when no position mark control is detected, use no mark as the mark type corresponding to the second location in the map interface.

Continuing with Figure 6G, position mark control X4 exists in the partial region, so the mark type corresponding to the second location in the map interface is the virtual-object position mark.
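The mark-type lookup just described (scan a circular region of radius R around the release point, take the nearest marker's type, fall back to "no mark") is a nearest-neighbor test. A minimal sketch, with an assumed marker-record layout:

```python
import math

def mark_type_at(second_pos, markers, radius):
    """Return the mark type of the position mark control nearest to
    second_pos within the circular detection region of the given radius,
    or 'none' when no control lies inside the region."""
    best, best_d = None, radius
    for m in markers:                       # m: {"pos": (x, y), "type": str}
        d = math.dist(second_pos, m["pos"])
        if d <= best_d:
            best, best_d = m, d
    return best["type"] if best else "none"

markers = [{"pos": (5, 5),   "type": "virtual_object"},
           {"pos": (30, 30), "type": "virtual_vehicle"}]
t = mark_type_at((6, 6), markers, radius=4.0)
```

Because the loop tightens `best_d` to each closer hit, ties and multiple markers inside the region resolve to the closest one, matching the "closest to the second location" rule; the radius plays the role of R, trading off tolerance to imprecise releases.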
In step 3042C, the database is queried with the trigger condition to obtain the message matching the trigger condition.

Here, the database may store the correspondences between different messages and different trigger conditions.

In some embodiments, when the command type is a move command and a virtual vehicle exists within a preset range around the second location, the content of the message is to regroup at the second location and enter the virtual vehicle. In the embodiments, a drivable vehicle is taken as the example virtual vehicle. Continuing with Figure 6A, the vehicle's position mark control Z1 is displayed near the first virtual object's control X2; if the movement operation moves control X3 to control X2, the message content may be "Regroup at teammate 2's position and get in the vehicle".

In some embodiments, when the command type is a move command and no virtual vehicle exists at the second location, the content of the message is to regroup at the second location. Continuing with Figure 5B, when no virtual vehicle exists at the second location of the movement operation, the message content may be "Move to the designated place".

In some embodiments, when the command type is an attack command, the content of the message is to go to the second location and attack. See Figure 5E, a map schematic of the method: the currently selected command of control 502A is the attack command. Then, after control X3 is moved to the second location, a small attack-command icon may be displayed near the moved X3`. The small icon is displayed synchronously in the second virtual object's map, helping the second virtual object determine the position to attack. The message content may be "Attack the designated position".

In some embodiments, when the command type is a defend command, the content of the message is to go to the second location and defend. Defend commands are handled in the same way as attack commands, not repeated here; the corresponding message content may be "Defend the designated position".
In some embodiments, when the map interface displays a map of a partial region of the virtual scene, see Figure 3D, a schematic flowchart of the method: messages may also be sent to second virtual objects outside the map through steps 302D to 304D, described below.

In step 302D, the position mark controls of non-appearing virtual objects are displayed outside the map.

Here, a non-appearing virtual object is a second virtual object not currently appearing in the partial region.

Illustratively, see Figure 6D, a map schematic of the method: control X4 is the second virtual object numbered 4 that is outside the scope of the virtual scene corresponding to the map; X4 is displayed at the outer upper edge of the map 501A.

In step 303D, in response to a movement operation on a non-appearing virtual object's position mark control, the control is moved from outside the map to the second location.

Illustratively, the movement operation in step 303D is the same as in step 303A and is not repeated here.

Continuing with Figure 6D, the hand icon indicates the press operation; in response to the movement operation, control X4 is moved from outside the map 501A to the second location inside the map 501A (the position of X4`, which represents the moved control X4).

In step 304D, a message is sent to the non-appearing virtual object.

Here, the message carries the second location and the command, and the message is a point-to-point message.

Illustratively, steps 303D and 304D are executed simultaneously. Determining the message content in step 304D may follow steps 3041C to 3042C above; the way step 304D sends the message is the same as step 304A and is not repeated here.

In the embodiments, displaying outside the map the position mark controls of virtual objects not appearing in the map, and sending messages to virtual objects outside the map via movement operations, covers efficient message sending to all faction virtual objects across the whole virtual scene while reusing the map interface, saving the terminal's computing resources related to rendering the virtual scene.
In some embodiments, see Figure 3E, a schematic flowchart of the method. When position mark controls representing the first locations where multiple second virtual objects are currently located are displayed in the map, step 303A may be implemented through steps 3031E and 3032E, and step 304A through step 3041E, described below.

In step 3031E, in response to a batch selection operation, multiple position mark controls are displayed in the selected state.

Illustratively, the selected state may be shown by highlighting, bold, check marks, and so on. See Figure 6E, a map schematic of the method: in the upper part of Figure 6E, controls X3, X4, and X5 are each annotated with a check mark 601E; the three position mark controls are batch-selected and displayed in the selected state.

In step 3032E, in response to the movement operation, the multiple position mark controls are moved from their respective first locations to the second location.

Illustratively, the movement operation targets any one of the multiple selected position mark controls. Continuing with Figure 6E, in the upper part the hand presses on control X3, and the movement operation acts only on X3, which moves along the operation's trajectory on the map. When the movement operation is released at the second location, every selected control that did not follow the operation is moved from its own first location to the second location. In the lower part of Figure 6E, the hand rests at the second location (that is, the movement operation is released there), and controls X4 and X5 have been moved to the second location.

In step 3041E, messages are sent to the second virtual objects respectively corresponding to the multiple position mark controls.

Here, the messages carry the second location and the command; every second virtual object receives the same second location and command.

Illustratively, steps 3032E and 3041E are executed simultaneously. The way step 3041E sends messages is the same as step 304A above and is not repeated here.

In the embodiments, batch-selecting position mark controls reuses the map for batch point-to-point message sending to multiple teammate virtual objects, improving sending efficiency, avoiding disturbing teammates unrelated to the message, avoiding occupying the running memory of the clients of teammates unrelated to the message, avoiding the high resource consumption brought by highly concurrent messages, and saving the computing resources needed to send messages.
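The batch flow above (every selected marker jumps to the release point, and each corresponding teammate receives the same point-to-point message) can be sketched as a single loop. The record layout and the injected `send` callback are assumptions for illustration:

```python
def batch_send(selected_markers, second_pos, command, send):
    """Move every batch-selected teammate marker to the release point and
    send each corresponding teammate an identical point-to-point message."""
    for marker in selected_markers:
        marker["pos"] = second_pos
        send(marker["player_id"], {"pos": second_pos, "cmd": command})

sent = []
sel = [{"player_id": 3, "pos": (1, 1)}, {"player_id": 4, "pos": (2, 2)}]
batch_send(sel, (9, 9), "move", lambda pid, msg: sent.append((pid, msg)))
```

Delivery remains point-to-point: one message per selected teammate, so unselected faction members receive nothing.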
In some embodiments, see Figure 3F, a schematic flowchart of the method. After step 304A, messages may also be sent to unmoved virtual objects through steps 305F and 306F, described below.

In step 305F, send-message controls corresponding to unmoved virtual objects are displayed in the map.

Here, an unmoved virtual object is a second virtual object to which no message has yet been sent; the send-message control is used to repeat-send the message.

Illustratively, see Figure 6F, a map schematic of the method: control X3 is moved to the second location, and when the second virtual object numbered 3 corresponding to X3 receives the corresponding message, X3 is displayed at that second virtual object's current position. While X3 was being moved, control X4 was not moved; the second virtual object corresponding to X4 is therefore an unmoved object, and a send-message control F1 is displayed near X4. The send-message control F1 repeats the last sent message.

In step 306F, in response to a trigger operation on any one send-message control, the message is sent to the unmoved virtual object corresponding to the triggered send-message control.

Illustratively, continuing with Figure 6F, suppose the second virtual object corresponding to X3 received the message "Regroup at the designated position". When the send-message control F1 corresponding to X4 is triggered, the second virtual object corresponding to X4 also receives the "Regroup at the designated position" message.

In the embodiments, the send-message control repeats the last sent message: the user can issue the same message without re-moving a position mark control in the map to the same end position as the last movement operation, saving the operation time required for message sending.

By displaying in the map interface a position mark control for a second virtual object in the same faction as the first virtual object and, when that control is moved, sending the second virtual object the corresponding command and message based on the movement operation, the embodiments of this application realize quick point-to-point message sending through the virtual scene's map interface: no speech or typed text is needed, and dragging the position mark control sends the message quickly, saving the time required for message sending. Moreover, because the message is sent only to the second virtual object, precise point-to-point delivery is achieved without disturbing other virtual objects in the same faction. At the same time, the position mark controls in the virtual scene's map interface are reused, so no new controls need to be added to the human-computer interaction interface for message sending, and point-to-point messaging works without an audio capture device (such as a microphone), saving the computing resources required by the virtual scene.
An exemplary application of the embodiments of this application in a multiplayer competitive game is described below. The multiplayer competitive games provided by the related art include communication methods such as voice chat, quick messages preset by the game system, and text input. But voice chat is limited by audio capture and playback devices: some players may have no microphone or other capture device, or no headphones or other playback device. Some players, unwilling to reveal their real voice in game, choose text chat, but typing text costs time; the shortcut messages preset by the game system are limited and cannot fully express what a player wants to convey. Messages visible or audible to the whole team may disturb some teammates (on the one hand, highly concurrent team-wide visible or audible messages make it hard for teammates to extract the useful ones; on the other, team-wide visible information wastes computing resources and occupies the running memory of teammates' clients), and these communication methods cannot realize one-to-one communication with a specific teammate. The message processing method in a virtual scene provided by the embodiments reuses the virtual scene's map: by moving the position mark control corresponding to a teammate on the map (for example a teammate icon control), a point-to-point message can be sent to that teammate quickly, improving the efficiency of message sending at low computational cost.

The method is described below taking joint execution by the terminal device 400 and the server 200 in Figure 1B as an example. See Figure 8, an optional schematic flowchart of the message processing method in a virtual scene provided by an embodiment of this application; the description follows the steps shown in Figure 8.
In step 801, it is determined whether the duration of a press operation on a teammate icon control in the map exceeds a press-time threshold.

Illustratively, the map is the virtual map corresponding to the virtual scene; the virtual map is bound to a coordinate system, and the coordinates of each position of the virtual scene are fixed in the virtual map. A teammate icon control is a position mark control in the map representing a second virtual object on the same team (or in the same faction) as the user's first virtual object; a teammate icon control is a position mark control that can be operated on (for example moved or pressed).

This is described with reference to the drawings. See Figure 5A, a map schematic of the method: in the map 501A, control X2 is the first virtual object's position mark control, the digit 2 indicating that the first virtual object's number within the team or faction is 2; control X3 is the second virtual object numbered 3. At the outer edge of the map 501A are the map zoom control 503A and the command control 502A. The zoom control 503A adjusts the scale between the map and the virtual scene: moving its round icon toward the plus sign 504A zooms the map in; conversely, moving it toward the minus sign 505A zooms the map out. The command control 502A switches the type of command carried in messages sent to teammates.

Illustratively, the press-time threshold may be 0.5 seconds. When the user holds the teammate icon control for 0.5 seconds, an icon trigger operation is judged to have been received, and the teammate icon control can then be moved on the map according to a movement operation. In response to the icon trigger operation, the teammate icon control is displayed in enlarged mode and follows the movement operation (a movement operation, that is, maintaining the press operation and sliding or dragging the pressed position across the human-computer interaction interface). See Figure 5B, a map schematic of the method: control X3 is displayed in enlarged mode, larger than the control X3 in Figure 5A.

Illustratively, displaying the teammate icon control in enlarged mode in the embodiments makes the operated position mark control more conspicuous and easier for the user to operate, improving interaction efficiency.

In step 802, in response to a movement operation on the teammate icon control, the control is moved to the end position of the movement operation.

Illustratively, the movement operation may be a continuous drag operation or slide operation.

Continuing with Figure 5B, the hand icon indicates the user's finger pressing on control X3. Once the user has held the press on X3 for longer than 0.5 seconds, X3 can be moved: the finger moves from the first location where X3 currently is toward the second location in the direction of the arrow, and X3 follows the position of the movement operation on the human-computer interaction interface. When the movement operation stops or is released, the stop or release position becomes the end position of the movement operation, that is, the second location. Control X3` at the second location is the moved position mark control; before the message to the second virtual object is issued, X3` is displayed temporarily at the second location. When message sending completes, the second virtual object's position mark control returns to the second virtual object's current position on the map.
In step 803, the currently selected command type is determined.

Illustratively, see Figure 7A, a schematic of the arrangement of command controls: the command types corresponding to control 502A include an attack command, a defend command, and a move command.

When the command type is move, step 804 is executed: determine the start-position feature and end-position feature of the movement operation.

Illustratively, the start-position feature is the region in the virtual scene corresponding to the start position, for example safe zone or unsafe zone; in the unsafe zone, a virtual object's health declines periodically, while, conversely, the safe zone is the region of the virtual scene where a virtual object's health does not enter a state of periodic decline.

Illustratively, the end-position feature is the region in the virtual scene corresponding to the end position (for example safe zone or unsafe zone) and whether a virtual object's position mark control, a virtual vehicle's position mark control, or a marker point exists at the end position (within a circular region centered on the end position). A marker point is a point in the map representing a position. See Figure 5D, a map schematic of the method: in response to a trigger operation on the first mark control 501D, the marking mode is entered; in response to a selection operation on any position on the map, the first custom marker point corresponding to the selected position is displayed, for example the first custom marker point D1; in response to a trigger operation on the second mark control 502D, the second custom marker point D2 is displayed at the position of the first virtual object's control X2. In Figure 5D, the end-position feature corresponding to the movement operation is: within the safe zone, with a marker point present, the marker point being the first custom marker point D1.

Illustratively, when a corresponding position mark control or marker point exists at the end position, the message contains corresponding content related to that control or marker point. For example: if a virtual vehicle exists at the end position, the message may include content such as "get in the vehicle" or "go to the vehicle's location and board it"; if the end position is in the safe zone, the message may include content such as "enter the safe zone".

In step 805, based on the start-position and end-position features, the corresponding message is matched in the message trigger condition library.

Illustratively, the trigger conditions for every message that can be triggered are compiled in advance into a database (the message trigger condition library). The library stores the messages and their corresponding trigger conditions. When the end-position feature (or end-position feature together with start-position feature) of a movement operation satisfies a message's trigger condition, the corresponding message is sent to the teammate corresponding to the moved teammate icon control. After a slide operation on a teammate icon control is recognized, its start position and end position are used as the slide operation's trigger condition and matched against identical trigger conditions in the library. The start position determines the behavior content of the virtual object in the sent message (enter the zone / move); the end position determines the destination noun in the message (designated place / virtual object position / vehicle).
Taking the messages corresponding to the move command as an example, the relationship between trigger conditions and messages is as follows:

1. A marker point exists at the end position the teammate icon control is moved to, and both start and end positions are in the safe zone: the corresponding message is "Move to the marked position".

2. A vehicle exists at the end position the teammate icon control is moved to, and both start and end positions are in the safe zone: the corresponding message is "Go to the vehicle's location and board it".

3. The end position the teammate icon control is moved to is the first virtual object's position: the corresponding message is "Regroup on me".

4. The teammate icon control's start position is outside the safe zone and its end position is inside the safe zone: the corresponding message is "Enter the safe zone".

5. The teammate icon control's start position is in the safe zone and another teammate icon control exists at the end position it is moved to: the corresponding message is "Regroup at that teammate".
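The five rules above amount to a lookup keyed on the drag's start/end position features. A minimal sketch of such a "message trigger condition library" follows; the key encoding and message texts paraphrase the examples above and are illustrative, not the patent's exact data.

```python
# key: (start zone, end zone, mark type at end position)
TRIGGER_LIBRARY = {
    ("safe",   "safe", "marker"):   "Move to the marked position",
    ("safe",   "safe", "vehicle"):  "Go to the vehicle's location and board it",
    ("safe",   "safe", "self"):     "Regroup on me",
    ("unsafe", "safe", "none"):     "Enter the safe zone",
    ("safe",   "safe", "teammate"): "Regroup at that teammate",
}

def match_message(start_zone, end_zone, end_mark):
    """Look up the message whose trigger condition matches the drag's
    start- and end-position features; None when nothing matches."""
    return TRIGGER_LIBRARY.get((start_zone, end_zone, end_mark))

msg = match_message("unsafe", "safe", "none")
```

Because the lookup is a plain dictionary access, matching is O(1) at release time, consistent with the goal of issuing the message immediately when the drag ends.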
In some embodiments, when the command type is a move command, different start- and end-position features correspond to different messages, described below.

In response to a movement operation on a designated teammate's icon control whose start position is outside the safe zone of the virtual scene in the map and whose end position is inside the safe zone of the virtual scene in the map, a "Hurry into the safe zone" message is sent to the designated teammate. See Figure 5C, a map schematic of the method: control X3 is moved from outside the safe zone 501C to inside it, so a "Hurry into the safe zone" text message can be sent to the designated teammate, and the end position corresponding to the movement operation is displayed in the designated teammate's map interface.

In response to a movement operation on a designated teammate's icon control whose start position is in the safe zone and whose end position has a place marker and is within the safe zone of the virtual scene in the map, a "Go to the designated place" message is sent to the designated teammate, where the designated place is the position corresponding to the place marker, and the place marker is displayed prominently in the designated teammate's map interface (for example bold, in a different color, or highlighted). Likewise, if the teammate is outside the safe zone, an "Enter the safe zone and go to the designated place" message is sent. If the end position of the movement operation is the first virtual object's position: when there is no vehicle at the first virtual object's position, a "Regroup on me" message is sent to the designated teammate; when there is a vehicle at the first virtual object's position, a "Get in the vehicle, quick" or "Hurry to such-and-such place and get in the vehicle" message is sent, where such-and-such place denotes a position in the virtual scene.

In some embodiments, while the teammate icon control moves according to the movement operation, the server begins comparing the operation's start-position feature against the trigger conditions in the message trigger condition library; when the movement operation ends, among the messages found for the start-position feature, it continues matching on the operation's end-position feature to obtain the matched trigger condition, and sends out the message corresponding to that condition. For example: the user's first virtual object is located inside the safe zone, and a movement operation is applied to the icon control of a teammate outside the safe zone, the end position of the operation being the current position of the first virtual object's position mark control. Two conditions are then satisfied: "start position outside the safe zone and end position inside the safe zone" and "end position is the position corresponding to the first virtual object". The teammate is therefore sent a message with the text "Hurry into the safe zone and regroup on me", and the first virtual object's position mark control is displayed prominently (for example highlighted, circled with an annotation box, in a different color, or bold) in the map interface of the second virtual object (the teammate receiving the first virtual object's message), making it easy for that user to steer the second virtual object to the first virtual object's position.

In step 806, the matched message is sent to the teammate corresponding to the teammate icon control.
Illustratively, the message is sent when the movement operation stops (for example, the user moves the position mark control to a position and stops moving) or is released (for example, the user lifts the finger pressing the position mark control).

Illustratively, the message may be sent as a voice message, a text message, or a mixed voice-and-text message. The ways the message instructs the second virtual object include:

1. Present the message content to the second virtual object in the form of voice or text, the content including the command and the second location.

For example: the text of a text message is "Go to building B (1234, 5678)", where "building B (1234, 5678)" is the second location, "go to" represents a move command, and (1234, 5678) are building B's coordinates on the map.

Another example: the text of a text message is "Go to the second floor of building A", where "go to" represents a move command and "the second floor of building A" is the explicit position of the second location.

2. Present message content including the command in the form of voice or text, and display at least one of the following in the second virtual object's map interface: a position mark at the second location, the direction of the second location relative to the second virtual object's position mark control, or the path between the second virtual object's position mark control and the second location. In this way, the voice or text message may omit the second location, or omit an explicit second location.

For example: see Figure 7B, a schematic of the virtual scene interface corresponding to the second virtual object: the map 702 is displayed in the upper-right corner of the virtual scene 701, and the imagery related to the movement operation in the first virtual object's map is displayed synchronously on the second virtual object's map 702, making the message more conspicuous and easier for the user corresponding to the second virtual object to react to in time. The virtual scene 701 shows the message text 703 "Regroup at teammate 2"; teammate 2 here is the first virtual object, that is, regroup at the position of the virtual object corresponding to the user who sent the message. The map 702 shows the direction and the path between the second virtual object's position mark control and the second location.

3. Present the message content to the second virtual object in the form of voice or text, the content including the command and the second location, and additionally display at least one of the following in the second virtual object's map interface: a position mark at the second location, the direction of the second location relative to the second virtual object's position mark control, or the path between the second virtual object's position mark control and the second location.

For example: the text of the message is "Regroup at the designated position (1472, 2147)", presented as text or voice in the human-computer interaction interface corresponding to the second virtual object; in addition, the second virtual object's map displays the position mark of the designated position, the path between the second virtual object's position mark control and the position mark corresponding to the designated position, and the direction of the designated position relative to the second virtual object's position mark control. Here, (1472, 2147) are the designated position's coordinates on the map. See Figure 5F, a map schematic of the method: the position mark 501F of the second location is displayed synchronously in the map interface of the message-receiving second virtual object. The position mark 501F is displayed at the second location (the position mark in Figure 5F is a circle; in practice it may also be rendered as a highlight or a marker box, or marked in a different color, making the second location more conspicuous for the user controlling the second virtual object). The dashed line between mark 501F and control X3 is the path between them, and the arrow pointing from X3 to 501F represents the direction between them.

Illustratively, a position mark control is a control that moves correspondingly on the map, following the marked object's position in the virtual scene. When the movement operation is released or stops and the message has been sent to the corresponding teammate, the position mark control is restored to the second virtual object's current position. Continuing with Figure 5B, when message sending completes, control X3` at the second location is hidden; if the second virtual object's current position remained at the first location during message sending, control X3 resumes display at the first location, exiting enlarged mode (that is, X3 is displayed at its original size).
When the command type is attack, step 807 is executed: based on the end-position feature of the movement operation, the corresponding message is matched in the message trigger condition library.

Illustratively, step 807 determines the end-position feature in the same way as step 804 above, and the message-matching principle is the same as step 805 above, not repeated here. Defend and attack commands are both commands that operate on virtual objects; the message-matching principle for the defend command is the same as for the attack command, also not repeated here.

In some embodiments, in response to a movement operation on a designated teammate's icon control, when a marker point exists at the end position of the movement operation, an "Attack the marked position" message is sent to the designated teammate. In response to a movement operation on a designated teammate's icon control, when an enemy virtual object exists at the end position, the text "Attack the enemy" is sent to the designated teammate, and the position mark of the second location where the enemy virtual object is located is displayed synchronously in the designated teammate's map interface. See Figure 5E, a map schematic of the method: the currently selected command of control 502A is the attack command. Then, after control X3 is moved to the second location, a small attack-command icon may be displayed at the second location; the icon is displayed synchronously in the second virtual object's map, helping the second virtual object determine the position to attack. Continuing with Figure 5F, when the movement operation is released, the second location's position mark 501F is displayed in the first virtual object's map interface, and at the same time the mark 501F is displayed synchronously in the map interface of the message-receiving second virtual object.

In some embodiments, for the defend command: in response to a movement operation on a designated teammate's icon control, when a marker point exists at the end position, a "Defend the marked position" message is sent to the designated teammate; in response to a movement operation on a designated teammate's icon control, when another teammate icon control exists at the end position, a "Protect such-and-such teammate" message is sent, where such-and-such teammate denotes the teammate's number or name.

In the embodiments, classifying the message query by command type improves the efficiency of querying messages in the message trigger condition library, so that the message is issued immediately when the movement operation is released or ends, improving message-sending efficiency.

After step 807, step 806 is executed: the matched message is sent to the teammate corresponding to the teammate icon control.

The specific ways of sending messages have been described above and are not repeated here.

By reusing the position mark controls in the virtual scene's map, the embodiments of this application let the user quickly send a teammate a point-to-point message through a movement operation on the map control representing that teammate. The point-to-point sending method avoids disturbing unrelated players (players who do not need to receive the message) and avoids burdening the running memory of unrelated players' clients; at the same time, it saves the graphics computing resources required by the virtual scene and is not limited by audio capture or playback devices, realizing efficient message sending in the virtual scene.
The following continues the description of an exemplary structure of the message processing apparatus 455 provided by the embodiments, implemented as software modules. In some embodiments, as shown in Figure 2, the software modules of the message processing apparatus 455 stored in the memory 450 may include: a display module 4551 configured to display, in the map interface corresponding to a first virtual object, a map of at least a partial region of the virtual scene; the display module 4551 being further configured to, in response to at least one second virtual object appearing in the partial region, display in the map a position mark control representing the first location where the second virtual object is currently located, where the second virtual object is any virtual object belonging to the same faction as the first virtual object; and a message sending module 4552 configured to, in response to a movement operation on the position mark control, move the control from the first location to a second location and send a message to the second virtual object, where the message carries the second location and a command.

In some embodiments, the message sending module 4552 is further configured to display a command control inside or outside the map, where the command control includes multiple types of candidate commands, and, in response to a command selection operation on any one candidate command in the command control, use the selected candidate as the command carried by the message.

In some embodiments, the message sending module 4552 is further configured to, in response to a command selection operation on any one candidate command in the command control, keep the selected candidate in the selected state until the next command selection operation is received; or, after sending the message to the second virtual object, switch from displaying the selected candidate in the selected state to displaying a default command in the selected state, where the default command is the candidate command among the multiple types that is set to be in an automatically-selected state.

In some embodiments, the message sending module 4552 is further configured to display a command control inside or outside the map, where the command control includes multiple types of candidate commands and one of them is in an automatically-selected state, and, in response to no command selection operation on any candidate being received within a set duration, use the automatically-selected candidate as the command carried by the message.

In some embodiments, the message sending module 4552 is further configured to, when the command control includes multiple candidate commands, sort them in any of the following ways: in descending or ascending order of each candidate's usage frequency; in the order set for the candidates; or in ascending or descending order of each candidate's usage probability.

In some embodiments, the message sending module 4552 is further configured to invoke a neural network model for prediction processing based on the parameters of the virtual objects in the virtual scene, obtaining the usage probability corresponding to each candidate command. The virtual object parameters include at least one of: the first virtual object's position and attribute values (the attribute values including combat power and health); the second virtual object's position and attribute values; and the difference between the attribute values of the first virtual object's faction and those of the enemy faction. The neural network model is obtained by training on the match data of at least two factions, the match data including the positions and attribute values of multiple virtual objects in at least two factions, the commands executed by the winning faction's virtual objects, and the commands executed by the losing faction's virtual objects; each command executed by a winning-faction virtual object is labeled with probability 1, and each command executed by a losing-faction virtual object is labeled with probability 0.
In some embodiments, when multiple position mark controls are displayed in the map, the message sending module 4552 is further configured to, in response to a selection operation on any one position mark control, display the selected control in a selected state, and, in response to a delete operation on position mark controls in the selected state, delete the controls in the selected state.

In some embodiments, when the map interface displays a map of a partial region of the virtual scene, the message sending module 4552 is further configured to display outside the map the position mark controls of non-appearing virtual objects, where a non-appearing virtual object is a second virtual object not currently appearing in the partial region; and, in response to a movement operation on a non-appearing virtual object's position mark control, move the control from outside the map to the second location and send a message to the non-appearing virtual object, where the message instructs the non-appearing virtual object to go to the second location and execute a command.

In some embodiments, when position mark controls representing the first locations where multiple second virtual objects are currently located are displayed in the map, the message sending module 4552 is further configured to, in response to a batch selection operation, display the multiple position mark controls in the selected state; and, in response to a movement operation, move the multiple controls from their respective first locations to the second location and send messages to the second virtual objects respectively corresponding to the multiple controls, where the messages instruct those second virtual objects to go to the second location and execute a command.

In some embodiments, after sending the message to the second virtual object, the message sending module 4552 is further configured to display in the map send-message controls corresponding to unmoved virtual objects, where an unmoved virtual object is a second virtual object to which no message has yet been sent and the send-message control is used to repeat-send the message; and, in response to a trigger operation on any one send-message control, send the message to the unmoved virtual object corresponding to the triggered send-message control.

In some embodiments, when the command type is a move command and a virtual vehicle exists at the second location, the message content is to regroup at the second location and enter the virtual vehicle; when the command type is a move command and no virtual vehicle exists at the second location, the message content is to regroup at the second location; when the command type is a defend command, the message content is to go to the second location and defend; when the command type is an attack command, the message content is to go to the second location and attack.

In some embodiments, the message sending module 4552 is further configured to send the message to the second virtual object in either of the following ways: in response to the movement operation on the position mark control being released, display a message-type selection control, where the selection control includes the following message types: voice messages, text messages, and mixed voice-and-text messages, and, in response to a selection operation on the type selection control, send the message to the second virtual object based on the selected message type; or, in response to the movement operation on the position mark control being released, send the message to the second virtual object based on a preset message type.
在一些实施例中,显示模块4551还配置为在在第一虚拟对象对应的地图界面中,显示虚拟场景的至少部分区域的地图之前,通过以下任意一种方式显示地图界面:在虚拟场景界面中显示虚拟场景,并在覆盖虚拟场景界面的部分区域的浮层上显示地图界面;在虚拟场景界面中显示虚拟场景,在虚拟场景界面的之外的区域中显示地图界面。
在一些实施例中,地图是虚拟场景的全部区域的预览图,或者,地图是虚拟场景中位于部分区域的预览图,其中,部分区域是:以第一虚拟对象为中心向外辐射的区域。
在一些实施例中,消息发送模块4552还配置为响应于针对位置标记控件的按压操作的持续时长达到按压时长阈值,将按压操作对应的位置标记控件以放大模式显示;响应于针对位置标记控件的移动操作,控制以放大模式显示的位置标记控件从第一位置开始同步移动;响应于移动操作被释放在第二位置,将以放大模式显示的位置标记控件移动到第二位置。
In some embodiments, the message sending module 4552 is further configured to, before the message is sent to the second virtual object, determine, based on the move operation, the first position, and the second position, a start position feature and an end position feature of the move operation in the virtual scene, and use the start position feature and the end position feature as a trigger condition; and query a database based on the trigger condition to obtain the message matching the trigger condition, where the database stores correspondences between different messages and different trigger conditions.
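The trigger-condition lookup can be illustrated with an in-memory database; the table schema, column names, and sample row below are assumptions for illustration, not the embodiments' actual storage format:

```python
import sqlite3

# Hypothetical schema: one row per (start feature, end feature) trigger condition.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE trigger_messages (
    start_region TEXT, end_region TEXT, end_marker_type TEXT, message TEXT)""")
conn.execute("INSERT INTO trigger_messages VALUES (?, ?, ?, ?)",
             ("base", "bridge", "vehicle_marker",
              "Assemble at the bridge and enter the vehicle"))

def lookup_message(start_region, end_region, end_marker_type):
    """Return the message matching the trigger condition, or None if absent."""
    row = conn.execute(
        "SELECT message FROM trigger_messages "
        "WHERE start_region=? AND end_region=? AND end_marker_type=?",
        (start_region, end_region, end_marker_type)).fetchone()
    return row[0] if row else None
```

Keyed lookup like this keeps message selection data-driven: new trigger-to-message pairs are added as rows rather than as code branches.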
In some embodiments, the message sending module 4552 is further configured to determine a first region in which the first position is located in the virtual scene and a second region in which the second position is located in the virtual scene; determine the marker type corresponding to the second position in the map interface, where the marker type includes no marker, virtual object position marker, and virtual vehicle position marker; and use the first region as the start position feature of the move operation, and use the second region and the marker type as the end position feature of the move operation.
In some embodiments, the message sending module 4552 is further configured to detect a partial region of the map centered on the second position; when at least one position marker control is detected, use the marker type of the detected position marker control closest to the second position as the marker type corresponding to the second position in the map interface; and, when no position marker control is detected, use no marker as the marker type corresponding to the second position in the map interface.
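The nearest-marker detection can be sketched as follows; the detection radius, coordinate representation, and marker-type strings are illustrative assumptions:

```python
import math

def marker_type_at(second_pos, markers, radius):
    """Return the marker type for the drop point: the type of the nearest
    marker within `radius` of second_pos, or "none" when no marker is found.

    markers: list of ((x, y), marker_type) pairs currently shown on the map.
    """
    in_range = [(math.dist(second_pos, pos), mtype)
                for pos, mtype in markers
                if math.dist(second_pos, pos) <= radius]
    if not in_range:
        return "none"
    # min over (distance, type) tuples picks the closest marker.
    return min(in_range)[1]
```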
In some embodiments, the display module 4551 is further configured to, in response to at least one virtual vehicle appearing in the partial region, display, in the map, a position marker control representing that the virtual vehicle is at the second position, where the marker type of the position marker control of the virtual vehicle is the virtual vehicle position marker.
In some embodiments, the message sending module 4552 is further configured to, in response to a trigger operation on a first marker control in the map, display entry into a location marking mode, and, in response to a tap operation on the map, display a first custom marker point at the tapped position on the map, where the first custom marker point is displayed synchronously in the map interface corresponding to the second virtual object; and, in response to a trigger operation on a second marker control in the map, display a second custom marker point at the first position at which the first virtual object is currently located on the map, where the second custom marker point is displayed synchronously in the map interface corresponding to the second virtual object.
The embodiments of this application provide a computer program product or computer program, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the message processing method in a virtual scene described above in the embodiments of this application.
The embodiments of this application provide a computer-readable storage medium storing executable instructions. When the executable instructions are executed by a processor, the processor is caused to perform the message processing method in a virtual scene provided by the embodiments of this application, for example, the message processing method in a virtual scene shown in FIG. 3A.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be any of various devices including one of or any combination of the above memories.
In some embodiments, the executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiments of this application, a position marker control of a second virtual object belonging to the same camp as the first virtual object is displayed in the map, or a position marker control of the second virtual object is displayed outside the map; when the position marker control of the second virtual object is moved, the corresponding instruction and message are sent to the second virtual object based on the move operation. This implements quick point-to-point message sending via the map interface of the virtual scene: without speaking or typing, a message can be sent quickly by dragging the position marker control, saving the time required for message sending. Moreover, because the message is sent only to the second virtual object, precise point-to-point message sending is achieved, avoiding interference with other virtual objects in the same camp. At the same time, the position marker control already present in the map interface of the virtual scene is reused, so no new control needs to be added to the human-computer interaction interface for message sending, and point-to-point messages can be sent without a sound pickup device (for example, a microphone), saving the computing resources required by the virtual scene.
The above descriptions are merely embodiments of this application and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and scope of this application shall fall within the protection scope of this application.

Claims (25)

  1. A message processing method in a virtual scene, executed by an electronic device, the method comprising:
    displaying, in a map interface corresponding to a first virtual object, a map of at least a partial region of a virtual scene;
    in response to at least one second virtual object appearing in the partial region, displaying in the map a position marker control representing a first position at which the second virtual object is currently located, wherein the second virtual object is any virtual object belonging to the same camp as the first virtual object;
    in response to a move operation on the position marker control, moving the position marker control from the first position to a second position, and sending a message to the second virtual object, wherein the message instructs the second virtual object to reach the second position and execute an instruction.
  2. The method according to claim 1, wherein before the moving the position marker control from the first position to a second position in response to the move operation on the position marker control, the method further comprises:
    displaying an instruction control inside or outside the map, wherein the instruction control comprises multiple types of candidate instructions;
    in response to an instruction selection operation on any one of the candidate instructions in the instruction control, using the selected candidate instruction as the instruction carried in the message.
  3. The method according to claim 2, wherein the method further comprises:
    in response to an instruction selection operation on any one of the candidate instructions in the instruction control, keeping the selected candidate instruction in a selected state until a next instruction selection operation is received; or,
    after the message is sent to the second virtual object, switching from displaying the selected candidate instruction in a selected state to displaying a default instruction in a selected state, wherein the default instruction is the candidate instruction, among the multiple types of candidate instructions, that is set to an automatically selected state.
  4. The method according to claim 1, wherein before the moving the position marker control from the first position to a second position in response to the move operation on the position marker control, the method further comprises:
    displaying an instruction control inside or outside the map, wherein the instruction control comprises multiple types of candidate instructions, and one of the multiple types of candidate instructions is in an automatically selected state;
    in response to no instruction selection operation on any one of the candidate instructions in the instruction control being received within a set duration, using the candidate instruction in the automatically selected state as the instruction carried in the message.
  5. The method according to any one of claims 2 to 4, wherein the method further comprises:
    when the instruction control comprises multiple candidate instructions, sorting the multiple candidate instructions in any one of the following ways:
    in descending or ascending order of the use frequency of each candidate instruction;
    in the order set for each candidate instruction;
    in ascending or descending order of the use probability of each candidate instruction.
  6. The method according to claim 5, wherein the method further comprises:
    calling a neural network model to perform prediction processing based on parameters of virtual objects in the virtual scene, to obtain the use probability corresponding to each candidate instruction;
    wherein the parameters of the virtual objects comprise at least one of the following: a position and attribute values of the first virtual object, the attribute values comprising combat power and health points; a position and attribute values of the second virtual object; a difference between attribute values of the camp to which the first virtual object belongs and attribute values of a hostile camp.
  7. The method according to claim 5, wherein before the calling a neural network model to perform prediction processing based on parameters of virtual objects in the virtual scene, the method further comprises:
    training the neural network model based on match data of at least two camps, wherein the match data comprises: positions and attribute values of multiple virtual objects in the at least two camps, instructions executed by virtual objects of a winning camp, and instructions executed by virtual objects of a losing camp; each instruction executed by a virtual object of the winning camp is labeled with probability 1, and each instruction executed by a virtual object of the losing camp is labeled with probability 0.
  8. The method according to any one of claims 1 to 4, wherein when multiple position marker controls are displayed in the map, the method further comprises:
    in response to a selection operation on any one of the position marker controls, displaying the selected position marker control in a selected state;
    in response to a delete operation on the position marker control in the selected state, deleting the position marker control in the selected state.
  9. The method according to any one of claims 1 to 4, wherein when the map interface displays a map of a partial region of the virtual scene, the method further comprises:
    displaying, outside the map, a position marker control of an absent virtual object, wherein the absent virtual object is a second virtual object that does not currently appear in the partial region;
    in response to a move operation on the position marker control of the absent virtual object, moving the position marker control from outside the map to the second position, and
    sending a message to the absent virtual object, wherein the message instructs the absent virtual object to reach the second position and execute the instruction.
  10. The method according to any one of claims 1 to 4, wherein when position marker controls representing first positions at which multiple second virtual objects are currently located are displayed in the map, the moving the position marker control from the first position to a second position in response to the move operation on the position marker control, and sending a message to the second virtual object, comprise:
    in response to a batch selection operation, displaying the multiple position marker controls in a selected state;
    in response to a move operation, moving the multiple position marker controls from their respective first positions to the second position, and
    sending the message to the second virtual objects respectively corresponding to the multiple position marker controls, wherein the message instructs the second virtual objects respectively corresponding to the multiple position marker controls to reach the second position and execute the instruction.
  11. The method according to any one of claims 1 to 4, wherein after the sending a message to the second virtual object, the method further comprises:
    displaying, in the map, a send-message control corresponding to an unmoved virtual object, wherein the unmoved virtual object is a second virtual object to which the message has not yet been sent, and the send-message control is used to send the message again;
    in response to a trigger operation on any one of the send-message controls, sending the message to the unmoved virtual object corresponding to the triggered send-message control.
  12. The method according to any one of claims 1 to 4, wherein when the type of the instruction is a move instruction and a virtual vehicle exists at the second position, the content of the message is to go to the second position to assemble and enter the virtual vehicle;
    when the type of the instruction is a move instruction and no virtual vehicle exists at the second position, the content of the message is to go to the second position to assemble;
    when the type of the instruction is a defend instruction, the content of the message is to go to the second position and defend;
    when the type of the instruction is an attack instruction, the content of the message is to go to the second position and attack.
  13. The method according to any one of claims 1 to 4, wherein the method further comprises:
    sending the message to the second virtual object in any one of the following ways:
    in response to the move operation on the position marker control being released, displaying a message type selection control, wherein the message type selection control comprises the following message types: voice message, text message, and mixed voice-and-text message; in response to a selection operation on the type selection control, sending the message to the second virtual object based on the selected message type;
    in response to the move operation on the position marker control being released, sending the message to the second virtual object based on a set message type.
  14. The method according to any one of claims 1 to 4, wherein before the displaying, in a map interface corresponding to a first virtual object, a map of at least a partial region of a virtual scene, the method further comprises:
    displaying the map interface in any one of the following ways:
    displaying the virtual scene in a virtual scene interface, and displaying the map interface on a floating layer covering a partial region of the virtual scene interface;
    displaying the virtual scene in the virtual scene interface, and displaying the map interface in a region outside the virtual scene interface.
  15. The method according to any one of claims 1 to 4, wherein the map is a preview of the entire region of the virtual scene, or the map is a preview of a partial region of the virtual scene, wherein the partial region is a region radiating outward from the first virtual object as the center.
  16. The method according to any one of claims 1 to 4, wherein the moving the position marker control from the first position to a second position in response to the move operation on the position marker control comprises:
    in response to the duration of a press operation on the position marker control reaching a press duration threshold, displaying the position marker control corresponding to the press operation in an enlarged mode;
    in response to a move operation on the position marker control, controlling the position marker control displayed in the enlarged mode to move synchronously starting from the first position;
    in response to the move operation being released at the second position, moving the position marker control displayed in the enlarged mode to the second position.
  17. The method according to any one of claims 1 to 4, wherein before the sending a message to the second virtual object, the method further comprises:
    determining, based on the move operation, the first position, and the second position, a start position feature and an end position feature of the move operation in the virtual scene, and using the start position feature and the end position feature as a trigger condition;
    querying a database based on the trigger condition to obtain the message matching the trigger condition; wherein the database stores correspondences between different messages and different trigger conditions.
  18. The method according to claim 17, wherein the determining, based on the move operation, the first position, and the second position, a start position feature and an end position feature of the move operation in the virtual scene comprises:
    determining a first region in which the first position is located in the virtual scene and a second region in which the second position is located in the virtual scene;
    determining a marker type corresponding to the second position in the map interface, wherein the marker type comprises no marker, virtual object position marker, and virtual vehicle position marker;
    using the first region as the start position feature of the move operation, and using the second region and the marker type as the end position feature of the move operation.
  19. The method according to claim 18, wherein the determining a marker type corresponding to the second position in the map interface comprises:
    detecting a partial region of the map centered on the second position;
    when at least one position marker control is detected, using the marker type of the detected position marker control closest to the second position as the marker type corresponding to the second position in the map interface;
    when no position marker control is detected, using no marker as the marker type corresponding to the second position in the map interface.
  20. The method according to claim 18, wherein the method further comprises:
    in response to at least one virtual vehicle appearing in the partial region, displaying, in the map, a position marker control representing that the virtual vehicle is at the second position, wherein the marker type of the position marker control of the virtual vehicle is the virtual vehicle position marker.
  21. The method according to any one of claims 1 to 4, wherein the method further comprises:
    in response to a trigger operation on a first marker control in the map, displaying entry into a location marking mode, and, in response to a tap operation on the map, displaying a first custom marker point at the tapped position on the map, wherein the first custom marker point is displayed synchronously in a map interface corresponding to the second virtual object;
    in response to a trigger operation on a second marker control in the map, displaying a second custom marker point at the first position at which the first virtual object is currently located on the map, wherein the second custom marker point is displayed synchronously in the map interface corresponding to the second virtual object.
  22. A message processing apparatus in a virtual scene, the apparatus comprising:
    a display module, configured to display, in a map interface corresponding to a first virtual object, a map of at least a partial region of a virtual scene;
    the display module being further configured to, in response to at least one second virtual object appearing in the partial region, display in the map a position marker control representing a first position at which the second virtual object is currently located, wherein the second virtual object is any virtual object belonging to the same camp as the first virtual object;
    a message sending module, configured to, in response to a move operation on the position marker control, move the position marker control from the first position to a second position, and send a message to the second virtual object, wherein the message instructs the second virtual object to reach the second position and execute an instruction.
  23. An electronic device, comprising:
    a memory, configured to store executable instructions;
    a processor, configured to implement, when executing the executable instructions stored in the memory, the message processing method in a virtual scene according to any one of claims 1 to 20.
  24. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the message processing method in a virtual scene according to any one of claims 1 to 20.
  25. A computer program product, comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the message processing method in a virtual scene according to any one of claims 1 to 20.
PCT/CN2023/083259 2022-05-23 2023-03-23 Message processing method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product WO2023226569A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210563612.6A 2022-05-23 2022-05-23 Message processing method and apparatus in virtual scene, electronic device, and storage medium
CN202210563612.6 2022-05-23

Publications (2)

Publication Number Publication Date
WO2023226569A1 true WO2023226569A1 (zh) 2023-11-30
WO2023226569A9 WO2023226569A9 (zh) 2024-04-11

Family

ID=88885418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083259 2022-05-23 2023-03-23 Message processing method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product WO2023226569A1 (zh)

Country Status (2)

Country Link
CN (1) CN117138357A (zh)
WO (1) WO2023226569A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102943A1 (en) * 2017-09-29 2019-04-04 Apple Inc. Cooperative augmented reality map interface
US10616567B1 (en) * 2018-09-21 2020-04-07 Tanzle, Inc. Frustum change in projection stereo rendering
CN112569600A * 2020-12-23 2021-03-30 Tencent Technology (Shenzhen) Co., Ltd. Method for sending path information in virtual scene, computer device, and storage medium
CN113101634A * 2021-04-19 2021-07-13 NetEase (Hangzhou) Network Co., Ltd. Virtual map display method and apparatus, electronic device, and storage medium
CN113198178A * 2021-06-03 2021-08-03 Tencent Technology (Shenzhen) Co., Ltd. Position prompt method and apparatus for virtual object, terminal, and storage medium
CN113398601A * 2021-06-25 2021-09-17 NetEase (Hangzhou) Network Co., Ltd. Information sending method and apparatus, computer-readable medium, and device
CN113813603A * 2021-09-29 2021-12-21 NetEase (Hangzhou) Network Co., Ltd. Game display control method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
WO2023226569A9 (zh) 2024-04-11
CN117138357A (zh) 2023-12-01

Similar Documents

Publication Publication Date Title
US10709982B2 (en) Information processing method, apparatus and non-transitory storage medium
CN108465238B Information processing method in game, electronic device, and storage medium
CN113101652A Information display method and apparatus, computer device, and storage medium
WO2022142626A1 Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product
CN111760274A Skill control method and apparatus, storage medium, and computer device
US20220297004A1 (en) Method and apparatus for controlling virtual object, device, storage medium, and program product
WO2023138192A1 Method for controlling virtual object to pick up virtual prop, terminal, and storage medium
US20230330536A1 (en) Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium
CN113426124A Display control method and apparatus in game, storage medium, and computer device
CN115040873A Game grouping processing method and apparatus, computer device, and storage medium
CN114344906A Method and apparatus for controlling partner object in virtual scene, device, and storage medium
WO2023226569A1 Message processing method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product
WO2023066003A1 Virtual object control method and apparatus, terminal, storage medium, and program product
CN116531758A Virtual character control method and apparatus, storage medium, and electronic apparatus
KR20240026256A (ko) 프롬프트 정보의 디스플레이 방법, 장치와 저장 매체 및 전자 기기
CN113018862B Virtual object control method and apparatus, electronic device, and storage medium
CN115999153A Virtual character control method and apparatus, storage medium, and terminal device
CN115193043A Game information sending method and apparatus, computer device, and storage medium
CN115120979A Display control method and apparatus for virtual object, storage medium, and electronic apparatus
CN113926187A Object control method and apparatus in virtual scene, and terminal device
CN113867873A Page display method and apparatus, computer device, and storage medium
CN113426115A Game character display method and apparatus, and terminal
WO2024060924A1 Interaction processing method and apparatus for virtual scene, electronic device, and storage medium
WO2024060888A1 Interaction processing method and apparatus for virtual scene, electronic device, computer-readable storage medium, and computer program product
WO2024021792A1 Information processing method and apparatus for virtual scene, device, storage medium, and program product

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23810644

Country of ref document: EP

Kind code of ref document: A1