CN114210061A - Map interaction processing method, device, equipment and storage medium in virtual scene

Info

Publication number
CN114210061A
Authority
CN
China
Prior art keywords: virtual, map, area, virtual scene, determining
Prior art date
Legal status
Pending
Application number
CN202111530787.9A
Other languages
Chinese (zh)
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202111530787.9A
Publication of CN114210061A
Status: Pending


Abstract

The application provides a map interaction processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product. The method comprises: displaying a virtual scene; in response to a deployment triggering operation of a first virtual prop equipped for a first virtual object, displaying a map interface of the virtual scene and displaying a map of the virtual scene in the map interface, wherein the transparency of an infeasible area in the map interface differs from that of a feasible area, the infeasible area being an area in which no virtual object can act; in response to a location selection operation performed on the map interface, determining the target position selected by that operation in the map interface; and in response to the target position being located in the feasible area, displaying the deployed first virtual prop at the position corresponding to the target position in the virtual scene. Through the application, the efficiency of human-computer interaction in the virtual scene can be improved.

Description

Map interaction processing method, device, equipment and storage medium in virtual scene
Technical Field
The present application relates to human-computer interaction technologies for computers, and in particular, to a map interaction processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Display technologies based on graphics processing hardware expand the environment that can be perceived and the channels through which information is acquired. In particular, display technologies for virtual scenes can realize diversified interaction between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and have various typical application scenarios; for example, a virtual scene such as a game can simulate a real battle process between virtual objects.
With the popularization of information technology, electronic devices can implement increasingly rich and vivid virtual scenes, typically games. More and more users participate in the interaction of virtual scenes through electronic devices, for example by rapidly joining the interaction process of a virtual scene through an in-game map, such as deploying a virtual prop in the virtual scene based on the feasible region in a map interface.
However, related technologies support the feasible region in the map interface in a complicated way; for example, they need to judge whether the abscissa and ordinate of each pixel point in the map interface fall within the coordinate range of the feasible region, which affects the efficiency of human-computer interaction in the virtual scene and, in turn, the use experience.
Disclosure of Invention
The embodiment of the application provides a map interaction processing method and device in a virtual scene, electronic equipment, a computer readable storage medium and a computer program product, which can improve the efficiency of human-computer interaction in the virtual scene.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a map interaction processing method in a virtual scene, which comprises the following steps:
displaying a virtual scene, wherein the virtual scene comprises a plurality of virtual objects;
responding to the deployment triggering operation of the first virtual prop equipped for the first virtual object, displaying a map interface of the virtual scene, and
displaying a map of the virtual scene in the map interface, wherein a transparency of an infeasible area in the map interface is distinct from a transparency of a feasible area, the infeasible area being an area in which any of the virtual objects cannot act;
in response to a location selection operation implemented at the map interface, determining a target location selected in the map interface by the location selection operation;
in response to the target location being located in the feasible region, displaying the deployed first virtual prop in the virtual scene at a location corresponding to the target location.
An embodiment of the present application provides a map interaction processing apparatus in a virtual scene, including:
a first display module, configured to display a virtual scene, wherein the virtual scene comprises a plurality of virtual objects;
the second display module is used for responding to deployment triggering operation of a first virtual prop equipped for a first virtual object, displaying a map interface of the virtual scene, and displaying a map of the virtual scene in the map interface, wherein the transparency of an infeasible area in the map interface is different from that of a feasible area, and the infeasible area is an area in which any virtual object cannot act;
the determining module is used for responding to the position selection operation implemented on the map interface and determining the target position selected by the position selection operation in the map interface;
and a third display module, configured to display the deployed first virtual prop in a position in the virtual scene corresponding to the target position in response to the target position being located in the feasible region.
In the above technical solution, the second display module is further configured to control the first virtual object to open a display device;
displaying a map interface of the virtual scene in a screen of the display device;
the determination module is further used for responding to the position selection operation which is performed by controlling the first virtual object in the screen aiming at the map interface, and determining the target position which is selected by the position selection operation in the map interface displayed in the screen.
In the above technical solution, uniform transparency is applied to the pixel points of the virtual scene located in the infeasible area; before responding to the target position being located in the feasible region, the third display module is further configured to determine the area inside the outline of the map interface and outside the outline of the map as the infeasible region, and to determine the map as the feasible region;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
In the foregoing technical solution, before responding to that the target location is located in the feasible region, the third display module is further configured to determine an area inside an outline of the map interface and outside the outline of the map as the infeasible region, and
determining as the infeasible area an area of the map that is unavailable for movement of any of the virtual objects therein;
determining an area of the map available for movement of any of the virtual objects therein as the feasible area;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
In the above technical solution, the third display module is further configured to determine an inactive area set for any one of the virtual objects in the virtual scene;
wherein the inactive area comprises at least one of: a plane corresponding to a space that cannot accommodate any virtual object, and a top surface of a virtual article that no virtual object can reach;
and determining the area in the map corresponding to the inactive area as the infeasible area.
In the foregoing technical solution, before responding that the target location is located in the feasible region, the third display module is further configured to determine an area inside an outline of the map interface and outside the outline of the map as an infeasible region, and
determining an area of the map where deployment of the first virtual prop is limited as an infeasible area;
determining an area of the map where deployment of the first virtual prop is not restricted as the feasible area;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
In the above technical solution, the third display module is further configured to determine an attribute of the first virtual item;
determining factors that limit deployment of the first virtual item in the virtual scene based on the attribute of the first virtual item;
wherein the factors include at least one of: environmental factors conflicting with the deployment environment required by the first virtual prop and environmental factors limiting the release of the skill of the first virtual prop;
and determining the area corresponding to the factors in the map as the infeasible area.
In the above technical solution, the contour of the map interface is a regular geometric shape, and the contour of the map is a regular or irregular geometric shape.
In the above technical solution, the second display module is further configured to display a plurality of candidate virtual items in a virtual item store, where the plurality of candidate virtual items include the first virtual item;
in response to an arming operation for the first virtual item, arming the first virtual item to a list of virtual items of the first virtual object;
and responding to the activation operation aiming at the first virtual prop in the virtual prop list, and displaying the activation state identification of the first virtual prop.
In the foregoing technical solution, the third display module is further configured to display prompt information in response to the target position being located in the infeasible area, where the prompt information indicates that the target position is invalid and a new target position needs to be selected.
In the above technical solution, before the deployed first virtual prop is displayed at the position corresponding to the target position in the virtual scene, the third display module is further configured to determine a plurality of reference points of the map in the map interface, and to determine the mapping points corresponding to the reference points in the virtual scene;
performing the following for any of the plurality of reference points: determining a candidate position corresponding to the target position in the virtual scene based on the vector from the target position to the reference point, the mapping point corresponding to the reference point, and the mapping relation between the map and the virtual scene;
and averaging the candidate positions respectively determined based on the plurality of reference points to obtain a position corresponding to the target position in the virtual scene.
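For illustration only, a minimal sketch of this reference-point averaging is given below; all names are hypothetical, and a single uniform map-to-scene scale factor stands in for the mapping relation described in the application:

```python
# A minimal sketch of the multi-reference-point mapping described above.
# All names are hypothetical; a uniform scale factor stands in for the
# "mapping relation" between the map and the virtual scene.

def map_to_scene(target, ref_points, scene_points, scale):
    """Map a target position selected in the map to a scene position.

    target       -- (x, y) selected in the map interface
    ref_points   -- [(x, y), ...] reference points in the map
    scene_points -- [(X, Y), ...] mapping points of those references in the scene
    scale        -- scene units per map pixel (assumed mapping relation)
    """
    candidates = []
    for (rx, ry), (sx, sy) in zip(ref_points, scene_points):
        # Vector between the reference point and the target position.
        vx, vy = target[0] - rx, target[1] - ry
        candidates.append((sx + vx * scale, sy + vy * scale))
    # Average the candidate positions determined from each reference point.
    n = len(candidates)
    return (sum(x for x, _ in candidates) / n,
            sum(y for _, y in candidates) / n)
```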
In the above technical solution, when the map of the virtual scene is displayed in the map interface, the first display module is further configured to determine positions of the plurality of virtual objects in the virtual scene;
mapping the positions of the virtual objects in the virtual scene to the map based on the mapping relation between the map and the virtual scene to obtain the display positions of the virtual objects in the map;
and displaying the position identifications of the plurality of virtual objects at the display position.
In the above technical solution, the first display module is further configured to display a plurality of sub-areas in the map of the virtual scene, where different sub-areas apply different transparencies;
wherein the transparency is related to at least one of the following indicators of the sub-area: the attack capability the sub-area affords against enemies, and the concealment of the sub-area.
In the above technical solution, when the map of the virtual scene is displayed in the map interface, the second display module is further configured to display at least one candidate position for deploying the first virtual item in the feasible region;
wherein a transparency different from that of the other locations in the feasible region is applied at the at least one candidate location, the other locations being locations in the feasible region other than the at least one candidate location.
In the above technical solution, the second display module is further configured to obtain a plurality of historical positions for deploying the first virtual item;
determining at least one candidate location for deploying the first virtual prop based on the plurality of historical locations;
wherein the type of the candidate location comprises at least one of: the historical position with the strongest attack capability against enemies; the historical position with the highest concealment; the historical position with the highest deployment frequency; and a position obtained by aggregating the plurality of historical positions.
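For illustration only, a minimal sketch of deriving such candidate locations from historical positions (names hypothetical; positions as (x, y) tuples):

```python
# A minimal sketch of deriving candidate deployment locations from the
# prop's historical positions (hypothetical names and scoring).

from collections import Counter

def candidate_locations(history, top_k=3):
    """Return the most frequently used historical positions plus the
    aggregate (mean) of all historical positions, as described above."""
    most_frequent = [pos for pos, _ in Counter(history).most_common(top_k)]
    n = len(history)
    aggregate = (sum(x for x, _ in history) / n,
                 sum(y for _, y in history) / n)
    return most_frequent + [aggregate]
```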
In the above technical solution, the second display module is further configured to obtain scene data of the virtual scene;
calling a position prediction model to perform position prediction processing based on the scene data of the virtual scene and the first virtual prop to obtain at least one candidate position for deploying the first virtual prop;
the position prediction model is obtained through training of historical scene data, deployed historical virtual props and position labels for deploying the historical virtual props.
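For illustration only, a hypothetical sketch of consuming such a model's output, assuming the trained model scores every map cell for deployment suitability given the scene data and the first virtual prop:

```python
# A hypothetical sketch: the trained position-prediction model is assumed
# to produce a per-map-cell score grid; the best-scoring cells become the
# candidate positions.

import numpy as np

def candidate_cells(score_grid: np.ndarray, top_k: int = 3):
    """score_grid: per-cell deployment scores output by the trained model."""
    flat_best = np.argsort(score_grid.ravel())[-top_k:][::-1]
    # Convert flat indices back to (row, col) map cells.
    return [np.unravel_index(i, score_grid.shape) for i in flat_best]
```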
In the above technical solution, the second display module is further configured to display a one-key automatic position selection function button;
in response to a triggering operation of the one-key automatic position selection function button, identifying the triggering operation as the position selection operation, and determining the optimal position among at least one candidate position for deploying the first virtual prop as the target position;
wherein the type of the optimal position comprises at least one of: the candidate position with the strongest attack capability against enemies; the candidate position with the highest concealment; the candidate position with the highest deployment frequency; and a position obtained by aggregating the at least one candidate position.
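For illustration only, a minimal sketch of the one-key automatic selection over scored candidates (all field names are hypothetical):

```python
# A minimal sketch of the one-key automatic selection (hypothetical
# candidate records with per-indicator scores).

def auto_select_target(candidates):
    """candidates: list of dicts with hypothetical keys
    'pos', 'attack', 'concealment', 'frequency'.
    Returns the position of the candidate with the best combined score;
    the application allows any of the listed notions of 'optimal'."""
    best = max(candidates,
               key=lambda c: c["attack"] + c["concealment"] + c["frequency"])
    return best["pos"]
```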
An embodiment of the present application provides an electronic device for map interaction processing, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the map interaction processing method in the virtual scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the map interaction processing method in a virtual scene provided in the embodiments of the present application.
An embodiment of the present application provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the map interaction processing method in a virtual scene provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
the transparency of the feasible region and the transparency of the infeasible region in the map interface are distinguished, so that whether the target position is located in the feasible region or not can be quickly determined based on the transparency of the target position, and the virtual prop is deployed at the position corresponding to the target position in the virtual scene, so that the human-computer interaction efficiency in the virtual scene is improved.
Drawings
FIG. 1 is a regular map provided by the related art;
fig. 2A is a schematic application mode diagram of a map interaction processing method in a virtual scene according to an embodiment of the present application;
fig. 2B is a schematic application mode diagram of a map interaction processing method in a virtual scene according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device for map interaction processing provided by an embodiment of the present application;
FIGS. 4A-4C are schematic flow diagrams of a map interaction processing method in a virtual scene according to an embodiment of the present application;
FIGS. 5A-5C are schematic diagrams of a map interface provided by an embodiment of the present application;
FIG. 5D is a schematic illustration of feasible regions provided by embodiments of the present application;
FIGS. 6A-6B are irregular maps provided by embodiments of the present application;
FIG. 7 is a map interface provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a material interface provided by an embodiment of the present application;
FIGS. 9A-9B are schematic diagrams of virtual screens provided by embodiments of the present application;
fig. 10 is a flowchart illustrating a map interaction processing method in a virtual scene according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a large map provided by an embodiment of the present application;
fig. 12 is a schematic diagram of a minimap provided in an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order or importance; where permissible, the specific order or sequence may be interchanged so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more operations performed may be carried out in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Client: an application running on a terminal to provide various services, such as a video playing client or a game client.
3) Virtual scene: the scene that an application displays (or provides) when running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, the virtual scene may include sky, land, and ocean, the land may include environmental elements such as deserts and cities, and the user may control a virtual object to move in the virtual scene.
4) Virtual object: the image of any person or object that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, and the like, such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) trained for fighting in the virtual scene, or a non-user character (NPC) set for interaction in the virtual scene. Alternatively, the virtual object may be a virtual character conducting antagonistic interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction. Taking a shooting game as an example, the user may control a virtual object to fall freely, glide, or open a parachute to descend in the sky of the virtual scene; to run, jump, crawl, or stoop forward on land; or to swim, float, or dive in the sea. The user may also control a virtual object to move in the virtual scene by riding a virtual vehicle, for example a virtual car, a virtual aircraft, or a virtual yacht; the above scenes are merely examples, and the present application is not limited thereto. The user can also control the virtual object to conduct antagonistic interaction with other virtual objects through virtual props; for example, a virtual prop may be a throwing-type virtual prop such as a grenade, a cluster grenade, or a sticky grenade, a shooting-type virtual prop (i.e., a virtual shooting prop) such as a machine gun, a pistol, or a rifle, or a skill-type virtual prop such as healing or attack.
5) Interaction process: a process in which virtual objects in a virtual scene develop according to the time or state of the interaction, for example, the process of a battle between virtual objects in a game, or the process of an engagement between virtual objects in one scene of the game.
6) Scene data: represents the various features that objects in the virtual scene exhibit during the interaction, and may include, for example, the locations of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in the virtual scene of a game, scene data may include the time required to wait for the various functions provided in the virtual scene (depending on how many times the same function can be used within a certain time), and attribute values indicating the various states of a game character, for example a life value (also referred to as the red amount) and a magic value (also referred to as the blue amount).
7) Shooting game: includes first-person shooting games, third-person shooting games, and the like, covering but not limited to all games that use hot weapons for ranged attacks.
8) Feasible region: in the map interface of a virtual scene, an area that the virtual character can move through (can reach) is referred to as a feasible area, and an area that the virtual character cannot move through (cannot reach) is referred to as an infeasible area.
For example, when a virtual scene with an irregular outer boundary is displayed in the map interface, a background is filled in outside the outer boundary of the virtual scene so that the map as a whole is an image with a regular outer boundary; the area in the map interface that does not exceed the outer boundary of the virtual scene is an area in which the virtual object can move, called the feasible area, and the area in the map interface that exceeds the outer boundary of the virtual scene is the infeasible area.
In the related art, only regular maps such as rectangles and circles can be processed. As shown in fig. 1, a rectangular map 12 is displayed in the map interface 11 and the map 12 is the feasible region; as long as the center position of the map 12 and the length and width of the map 12 are known, it is easy to determine whether a target position (for example, the position of a virtual character) is in the feasible region.
However, this feasible-region determination method in the related art works only for regular maps; it cannot perform feasible-region determination based on an irregular map.
In order to solve the above problem, embodiments of the present application provide a map interaction processing method and apparatus in a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product, which can improve efficiency of human-computer interaction in the virtual scene. In order to facilitate easier understanding of the map interaction processing method in the virtual scene provided by the embodiment of the present application, an exemplary implementation scenario of the map interaction processing method in the virtual scene provided by the embodiment of the present application is first described.
In some embodiments, the virtual scene may be an environment in which game characters interact; for example, game characters may battle in the virtual scene, and by controlling the actions of the game characters, both sides can interact antagonistically in the virtual scene, allowing the user to relieve the stress of daily life during the game.
In one implementation scenario, referring to fig. 2A, fig. 2A is a schematic application mode diagram of the map interaction processing method in a virtual scene provided in this embodiment of the present application. It is applicable to application modes in which the computation of the data related to the virtual scene 100 can be completed entirely by the graphics processing hardware of the terminal 400, for example a game in single-machine/offline mode, with the output of the virtual scene completed through various types of terminals 400 such as a smart phone, a tablet computer, or a virtual reality/augmented reality device.
As an example, types of Graphics Processing hardware include a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU).
When forming the visual perception of the virtual scene 100, the terminal 400 computes the data required for display through the graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs at the graphics output hardware video frames capable of forming visual perception of the virtual scene; for example, two-dimensional video frames are presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect are projected on the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perception effect, the terminal 400 may also form one or more of auditory perception, tactile perception, motion perception, and taste perception by means of different hardware.
As an example, the terminal 400 runs a client 410 (e.g., a standalone version of a game application), and outputs a virtual scene including role play during the running of the client 410. The virtual scene may be an environment for game-character interaction, such as a plain, a street, or a valley for game characters to battle in. Taking display of the virtual scene 100 from a first-person perspective as an example, a first virtual object 101, a virtual prop 102 with which the first virtual object 101 is equipped, and a map interface 103 are displayed in the virtual scene 100. The first virtual object 101 may be a game character controlled by a user (or player); that is, the first virtual object 101 is controlled by a real user and operates in the virtual scene in response to the real user's operation of buttons (including a joystick button, an attack button, a defense button, and the like). For example, when the real user moves the joystick button to the left, the first virtual object moves to the left in the virtual scene; it may also remain stationary in place, jump, and use various functions (such as skills and props). The virtual prop 102 is controlled by the first virtual object 101 and operates in the virtual scene in response to the operation of the first virtual object 101, so that game battle is realized by controlling the virtual prop. The map interface 103 includes a map of the virtual scene 100. The first virtual object 101 may also be a non-user character (NPC) or the like in the virtual scene interaction.
For example, a first virtual object 101 in the virtual scene 100 is equipped with a virtual prop 102. For a deployment operation of the virtual prop in the virtual scene 100, the target position selected for deployment in the map interface 103 is determined. When the target position is determined, based on its transparency, to be located in the feasible region of the map interface 103 (the transparency of the feasible region in the map interface differs from that of the infeasible region), the deployed virtual prop 102 is displayed at the position corresponding to the target position in the virtual scene 100. For example, if the virtual prop 102 is a virtual airplane, the deployed virtual missile is displayed at the position corresponding to the target position in the virtual scene 100, realizing the function of calling in a virtual missile based on the map of the virtual scene. In this way, whether the target position is located in the feasible region can be quickly determined based on the transparency of the target position, so that the virtual prop is deployed at the corresponding position in the virtual scene, and the efficiency of human-computer interaction in the virtual scene is improved.
In another implementation scenario, referring to fig. 2B, fig. 2B is a schematic application mode diagram of the map interaction processing method in a virtual scene. It is applied to the terminal 400 and the server 200, and is adapted to application modes that depend on the computing capability of the server 200 to complete the virtual scene computation and output the virtual scene at the terminal 400.
Taking the formation of the visual perception of the virtual scene 100 as an example, the server 200 computes display data (e.g., scene data) related to the virtual scene and sends it to the terminal 400 through the network 300; the terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception. For example, two-dimensional video frames may be presented on the display screen of a smart phone, or video frames realizing a three-dimensional display effect may be projected on the lenses of augmented reality/virtual reality glasses. For other forms of perception of the virtual scene, it is understood that corresponding hardware outputs of the terminal 400 may be used, for example a microphone for auditory perception, a vibrator for tactile perception, and so on.
As an example, the terminal 400 runs a client 410 (e.g., a network-version game application) and interacts with other users in the game by connecting to the server 200 (e.g., a game server). The terminal 400 outputs the virtual scene 100 of the client 410, displaying it from a first-person perspective, and displays in it a first virtual object 101, a virtual prop 102 with which the first virtual object 101 is equipped, and a map interface 103. The first virtual object 101 may be a game character controlled by the user (or player); that is, the first virtual object 101 is controlled by a real user and operates in the virtual scene in response to the real user's operation of buttons (including a joystick button, an attack button, a defense button, and the like). For example, when the real user moves the joystick button to the left, the first virtual object moves to the left in the virtual scene; it may also remain stationary in place, jump, and use various functions (such as skills and props). The virtual prop 102 is controlled by the first virtual object 101 and operates in the virtual scene in response to the operation of the first virtual object 101, so that game battle is realized by controlling the virtual prop. The map interface 103 includes a map of the virtual scene 100. The first virtual object 101 may also be a non-user character (NPC) or the like in the virtual scene interaction.
For example, a first virtual object 101 in the virtual scene 100 is equipped with a virtual prop 102. For a deployment operation of the virtual prop in the virtual scene 100, the target position selected for deployment in the map interface 103 is determined. When the target position is determined, based on its transparency, to be located in the feasible region of the map interface 103 (the transparency of the feasible region in the map interface differs from that of the infeasible region), the deployed virtual prop 102 is displayed at the position corresponding to the target position in the virtual scene 100. For example, if the virtual prop 102 is a virtual airplane, the deployed virtual missile is displayed at the position corresponding to the target position in the virtual scene 100, realizing the function of calling in a virtual missile based on the map of the virtual scene. In this way, whether the target position is located in the feasible region can be quickly determined based on the transparency of the target position, so that the virtual prop is deployed at the corresponding position in the virtual scene, and the efficiency of human-computer interaction in the virtual scene is improved.
In some embodiments, the terminal 400 may implement the map interaction processing method in a virtual scene provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e., a program that must be installed in the operating system to run, such as a battle game APP (i.e., the client 410 described above); an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded into any APP. In general, the computer program described above may be any form of application, module, or plug-in.
Taking the computer program as an application program as an example, in actual implementation the terminal 400 installs and runs an application program supporting virtual scenes. The application program may be any one of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal 400 to operate a virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building a virtual building. Illustratively, the virtual object may be a virtual character, such as a simulated character or an animated character.
In some embodiments, the embodiments of the present application may also be implemented by means of cloud technology, which refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technologies, information technologies, integration technologies, management platform technologies, application technologies, and the like applied on the basis of the cloud computing business model; it can form a pool of resources to be used on demand, flexibly and conveniently. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
For example, the server 200 in fig. 2B may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device for map interaction processing provided in an embodiment of the present application, and is described by taking the electronic device as a terminal 400 as an example, where the electronic device 400 shown in fig. 3 includes: at least one processor 420, memory 460, at least one network interface 430, and a user interface 440. The various components in the terminal 400 are coupled together by a bus system 450. It is understood that the bus system 450 is used to enable connected communication between these components. The bus system 450 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 450 in fig. 3.
The Processor 420 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 440 includes one or more output devices 441, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 440 also includes one or more input devices 442 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display screen, camera, other input buttons and controls.
The memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 460 may optionally include one or more storage devices physically located remote from processor 420.
The memory 460 may include volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 460 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 460 may be capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 comprising system programs for handling various basic system services and performing hardware related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and handling hardware based tasks;
a network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, exemplary network interfaces 430 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 463 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 441 (e.g., display screens, speakers, etc.) associated with user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the map interaction processing apparatus in a virtual scene provided in the embodiments of the present application may be implemented in software. Fig. 3 shows the map interaction processing apparatus 465 in a virtual scene stored in the memory 460, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a first display module 4651, a second display module 4652, a determination module 4653, and a third display module 4654. These modules are logical, and thus may be arbitrarily combined or further divided according to the functions implemented. It should be noted that all of the above modules are shown at once in fig. 3 for convenience of expression, but this should not be taken to exclude an implementation of the map interaction processing apparatus 465 that includes only the first display module 4651, the second display module 4652, the determination module 4653, and the third display module 4654. The functions of the respective modules will be explained below.
In other embodiments, the map interaction processing Device in the virtual scene provided in this Application may be implemented in a hardware manner, for example, the map interaction processing Device in the virtual scene provided in this Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the map interaction processing method in the virtual scene provided in this Application, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic elements.
The map interaction processing method in the virtual scene provided by the embodiment of the present application will be specifically described below with reference to the accompanying drawings. The map interaction processing method in the virtual scene provided by the embodiment of the present application may be executed by the terminal 400 in fig. 2A alone, or may be executed by the terminal 400 and the server 200 in fig. 2B in a cooperation manner.
Next, a description will be given taking an example in which the terminal 400 in fig. 2A alone performs the map interaction processing in the virtual scene provided in the embodiment of the present application. Referring to fig. 4A, fig. 4A is a schematic flowchart of a map interaction processing method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4A.
It should be noted that the method shown in fig. 4A can be executed by various forms of computer programs running on the terminal 400, and is not limited to the client 410 described above, but may also be the operating system 461, software modules and scripts described above, so that the client should not be considered as limiting the embodiments of the present application.
In step 101, a virtual scene is displayed, wherein the virtual scene includes a plurality of virtual objects.
For example, at least two virtual objects are included in the virtual scene. Diversified interaction between virtual objects has various typical application scenarios, for example in virtual scenes of confrontation games and the like. Taking a confrontation game as an example, confrontation between virtual objects can be realized through skills, props, and the like. The virtual objects in the embodiments of the present application may be game characters controlled by users (or players), where one user corresponds to at least one virtual object.
In step 102, in response to a deployment triggering operation of a first virtual prop equipped for a first virtual object, displaying a map interface of a virtual scene, and displaying a map of the virtual scene in the map interface, wherein the transparency of an infeasible area in the map interface is different from the transparency of a feasible area, and the infeasible area is an area in which any virtual object cannot act.
For example, as shown in fig. 5A, a first virtual prop 501 with which the first virtual object is equipped is displayed in the virtual scene. When it is determined, through the map of the virtual scene, that the first virtual prop 501 needs to be used in the virtual scene, a deployment trigger operation for the first virtual prop is triggered; in response to the deployment trigger operation, a map interface 502 of the virtual scene is displayed, and a map 503 of the virtual scene is displayed in the map interface 502. Note that transparency in the virtual scene is expressed as part of a color and is represented by an alpha value. The first virtual object is the virtual object associated with the account currently logged in at the terminal. As shown in fig. 5A, the outline of the map interface is a regular geometric shape (the rectangle shown in fig. 5A) and the outline of the map here is also a regular geometric shape, although the outline of the map may instead be an irregular geometric shape.
Referring to fig. 4B, fig. 4B is an alternative flowchart of a map interaction processing method in a virtual scene according to an embodiment of the present application. Fig. 4B shows that, before responding to the target position being located in the feasible region, fig. 4A further includes steps 105 to 106, with uniform transparency applied to the pixel points of the virtual scene located in the infeasible area: in step 105, the area inside the outline of the map interface and outside the outline of the map is determined as the infeasible area, and the map is determined as the feasible area; in step 106, in response to the transparency of the pixel point corresponding to the target position in the map interface being different from the transparency of the infeasible area, it is determined that the target position is located in the feasible area.
It should be noted that the outline of the map in this embodiment of the present application may be either an irregular geometric shape or a regular geometric shape. As shown in fig. 5A, the area inside the outline of the map interface and outside the outline of the map is determined as the infeasible area; that is, uniform transparency is applied to the infeasible area, and the map is determined as the feasible area, that is, a transparency different from that of the infeasible area is applied to the feasible area to distinguish the two. After the feasible and infeasible areas are distinguished by transparency, when the transparency of the pixel point corresponding to the selected target position in the map interface differs from the transparency of the infeasible area, the target position can be determined to be located in the feasible area. In this way, whether the target position is located in the feasible area can be quickly determined based on the transparency of the target position, the virtual prop is deployed at the position corresponding to the target position in the virtual scene, and the efficiency of human-computer interaction in the virtual scene is improved.
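For illustration only, a minimal sketch of this transparency check follows; the zero-alpha convention, the use of PIL, and all names are assumptions rather than part of the application:

```python
# A minimal sketch of the transparency-based feasibility check described
# above (the alpha convention and all names are hypothetical).

from PIL import Image

INFEASIBLE_ALPHA = 0  # uniform alpha assumed for every infeasible-area pixel

def is_in_feasible_region(map_image: Image.Image, x: int, y: int) -> bool:
    """True if pixel (x, y) of the map interface lies in the feasible region.

    Only the alpha channel is inspected: because one uniform alpha is
    applied to the whole infeasible area, no coordinate-range test against
    the (possibly irregular) map outline is needed.
    """
    _, _, _, alpha = map_image.convert("RGBA").getpixel((x, y))
    return alpha != INFEASIBLE_ALPHA
```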
Referring to fig. 4C, fig. 4C is an optional flowchart of a map interaction processing method in a virtual scene according to an embodiment of the present application, and fig. 4C shows that before responding to that the target location is located in the feasible region, fig. 4A further includes steps 107 to 109: in step 107, determining an area inside the outline of the map interface and outside the outline of the map as an infeasible area, and determining an area in the map that is unavailable for any virtual object to move therein as an infeasible area; in step 108, determining an area in the map, in which any virtual object can move, as a feasible area; in step 109, in response to that the transparency of the pixel point corresponding to the target location in the map interface is different from the transparency of the infeasible area, it is determined that the target location is located in the feasible area.
It should be noted that the outline of the map in this embodiment of the present application may be either an irregular geometric shape or a regular geometric shape. As shown in fig. 5B, the area inside the outline of the map interface and outside the outline of the map is determined as an infeasible area, and the area 504 in the map 503 that is not available for any virtual object to move in is also determined as an infeasible area; that is, uniform transparency is applied to the infeasible area, while the area in the map 503 that is available for any virtual object to move in is determined as the feasible area, that is, a transparency different from that of the infeasible area is applied to the feasible area to distinguish the two. After the feasible and infeasible areas are distinguished by transparency, when the transparency of the pixel point corresponding to the selected target position in the map interface differs from the transparency of the infeasible area, the target position can be determined to be located in the feasible area; thus, whether the target position is located in the feasible area can be quickly determined based on its transparency, the virtual prop is deployed at the corresponding position in the virtual scene, and the efficiency of human-computer interaction in the virtual scene is improved. By judging whether areas can be moved through and accurately classifying the areas in which virtual objects cannot move as infeasible areas, the accuracy of map interaction is improved.
In some embodiments, determining an area of the map that is unavailable for any virtual object to move in as an infeasible area includes: determining the inactive area set for any virtual object in the virtual scene, wherein the inactive area includes at least one of: a plane corresponding to a space that cannot accommodate any virtual object, and a top surface of a virtual article that no virtual object can reach; and determining the area in the map corresponding to the inactive area as the infeasible area.
For example, the planes corresponding to spaces that cannot accommodate any virtual object include reference planes such as the virtual ground, the virtual water surface, and the virtual sky. The game program determines the inactive area set for any virtual object in the virtual scene, such as the ground corresponding to a space that cannot accommodate the virtual object and the top of a virtual building that cannot be climbed. Correspondingly, the movable region set for any virtual object in the virtual scene may be ground on which the virtual object can move, the top surface of a virtual building that the virtual object can climb, a virtual mountain top, or the like. The area in the map corresponding to the inactive area is determined as the infeasible area through the mapping relation between the virtual scene and the map.
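For illustration only, a minimal sketch of marking the map pixels that correspond to inactive scene areas (hypothetical names; axis-aligned rectangles and a uniform scale stand in for the real geometry and mapping relation):

```python
# A minimal sketch: paint the map cells corresponding to the scene's
# inactive areas with the uniform infeasible alpha (all names hypothetical).

def paint_infeasible(map_pixels, inactive_scene_rects, scale, alpha=0):
    """map_pixels: 2D list of [r, g, b, a] channel lists for the map image.
    inactive_scene_rects: (x0, y0, x1, y1) scene rectangles no object can act in.
    scale: map pixels per scene unit (assumed mapping relation)."""
    for (x0, y0, x1, y1) in inactive_scene_rects:
        for py in range(int(y0 * scale), int(y1 * scale)):
            for px in range(int(x0 * scale), int(x1 * scale)):
                map_pixels[py][px][3] = alpha  # apply the uniform alpha
```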
In some embodiments, prior to responding to the target location being located in the feasible region, determining a region within the outline of the map interface and outside the outline of the map as an infeasible region, and determining a region of the map that restricts deployment of the first virtual prop as an infeasible region; determining an area, in the map, where deployment of the first virtual prop is not limited, as a feasible area; and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
It should be noted that the outline of the map in this embodiment of the present application may be either an irregular geometric shape or a regular geometric shape. As shown in fig. 5C, the area inside the outline of the map interface and outside the outline of the map is determined as an infeasible area, and the area 505 in the map 503 in which deployment of the first virtual prop is restricted is also determined as an infeasible area; that is, uniform transparency is applied to the infeasible area, while the area in the map 503 in which deployment of the first virtual prop is not restricted is determined as the feasible area, that is, a transparency different from that of the infeasible area is applied to the feasible area to distinguish the two. After the feasible and infeasible areas are distinguished by transparency, when the transparency of the pixel point corresponding to the selected target position in the map interface differs from the transparency of the infeasible area, the target position can be determined to be located in the feasible area; thus, whether the target position is located in the feasible area can be quickly determined based on its transparency, the virtual prop is deployed at the corresponding position in the virtual scene, and the efficiency of human-computer interaction in the virtual scene is improved. By judging whether deployment of the first virtual prop is restricted in an area and accurately classifying restricted areas as infeasible areas, the accuracy of map interaction is improved.
In some embodiments, determining the area of the map that restricts deployment of the first virtual prop as an infeasible area comprises: determining an attribute of the first virtual item; determining factors limiting deployment of the first virtual prop in the virtual scene based on the attribute of the first virtual prop; wherein the factors include at least one of: environmental factors conflicting with the deployment environment required by the first virtual prop and environmental factors limiting the release of the skill of the first virtual prop; and determining the area of the map corresponding to the factors as an infeasible area.
For example, the infeasible area corresponding to the first virtual prop inside the virtual scene is determined from the attribute of the first virtual prop. 1) The infeasible area contains environmental factors that conflict with the deployment environment required by the first virtual prop: if the first virtual prop is a virtual ship, the infeasible area in the virtual scene is any non-water area; if the first virtual prop has a concealment requirement, the infeasible area is any area without a suitable shelter. 2) The infeasible area contains environmental factors that limit the release of the first virtual prop's skill: for example, if the first virtual prop is a virtual anti-aircraft gun, the infeasible area is any area where the view of the virtual sky is obstructed.
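A hedged sketch of how such attribute-driven restrictions might be expressed in code follows; the attribute names, terrain tags, and rules are illustrative assumptions, not details from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class VirtualProp:
    name: str
    requires_water: bool = False      # e.g. a virtual ship
    requires_cover: bool = False      # a prop with a concealment requirement
    requires_open_sky: bool = False   # e.g. a virtual anti-aircraft gun

def restricts_deployment(prop: VirtualProp, terrain: str) -> bool:
    """Decide whether one map cell restricts deployment of the prop.
    `terrain` is an assumed per-cell tag such as "water", "forest",
    "indoor", or "plain"."""
    if prop.requires_water and terrain != "water":
        return True  # deployment environment conflict: a ship needs water
    if prop.requires_cover and terrain in ("plain", "water"):
        return True  # no suitable shelter in open terrain
    if prop.requires_open_sky and terrain == "indoor":
        return True  # skill release limited: the virtual sky is blocked
    return False
```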
In some embodiments, before responding to the deployment triggering operation of the first virtual prop equipped for the first virtual object, a plurality of candidate virtual props is displayed in a virtual prop store, wherein the plurality of candidate virtual props includes the first virtual prop; in response to an equipping operation for the first virtual prop, the first virtual prop is equipped into the virtual prop list of the first virtual object; and in response to an activation operation for the first virtual prop in the virtual prop list, an activation state identifier of the first virtual prop is displayed.
For example, the virtual prop store offers a plurality of candidate virtual props for selection. When the first virtual prop is selected in the store for equipping, it is added to the virtual prop list of the first virtual object, i.e., the first virtual object holds at least one equipped virtual prop. When the first virtual prop is activated, its activation state identifier is displayed in real time; as shown in fig. 5C, when the first virtual prop 501 is activated, it is highlighted.
It should be noted that, in the embodiment of the present application, multiple virtual props equipped on the first virtual object may be activated at the same time; the virtual prop to be deployed is then selected through a prop switching operation, so that the feasible region and the infeasible region corresponding to that virtual prop are displayed.
In some embodiments, when the map of the virtual scene is displayed in the map interface, the locations of the plurality of virtual objects in the virtual scene are determined; the positions of the virtual objects in the virtual scene are mapped onto the map based on the mapping relationship between the map and the virtual scene, to obtain the display positions of the virtual objects in the map; and the position identifiers of the plurality of virtual objects are displayed at those display positions.
For example, the positions of the plurality of virtual objects are mapped onto the map as follows: determine a plurality of reference points of the virtual scene (e.g., A, B, C shown in fig. 11) and the mapping points in the map corresponding to the reference points (e.g., A1, B1, C1 shown in fig. 12); for each of the plurality of reference points, determine a candidate position of the virtual object in the map based on the vector from the virtual object to the reference point, the mapping point corresponding to the reference point, and the mapping relationship between the map and the virtual scene; average the candidate positions determined from the respective reference points to obtain the display position of the virtual object in the map; and display the position identifiers of the plurality of virtual objects at the display positions (e.g., the position identifier 506 shown in fig. 5C).
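A minimal sketch of this scene-to-map projection follows, assuming 2D coordinates, three fixed reference points, and a uniform scale factor as the mapping relationship; all of these are assumptions chosen for illustration.

```python
SCENE_REFS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # A, B, C in the virtual scene
MAP_REFS   = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]    # A1, B1, C1 in the map
SCALE = 0.1  # assumed scene-to-map scale

def scene_to_map(position):
    """Project a scene position into the map via each reference point,
    then average the candidates to obtain the display position."""
    candidates = []
    for (ax, ay), (mx, my) in zip(SCENE_REFS, MAP_REFS):
        # vector from the reference point to the object, scaled into map space
        vx = (position[0] - ax) * SCALE
        vy = (position[1] - ay) * SCALE
        candidates.append((mx + vx, my + vy))
    n = len(candidates)
    return (sum(cx for cx, _ in candidates) / n,
            sum(cy for _, cy in candidates) / n)
```

With an exact uniform scale the three candidates coincide; averaging matters when the per-reference-point computations carry rounding or calibration error.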
In some embodiments, when displaying a map of the virtual scene in the map interface, displaying at least one candidate location in the feasible region for deploying the first virtual prop; wherein the transparency is applied at the at least one candidate location differently from other locations in the feasible region, the other locations representing locations in the feasible region other than the at least one candidate location.
For example, when the map of the virtual scene is displayed in the map interface, at least one candidate location for deploying the first virtual prop is automatically displayed in the feasible region. The at least one candidate location is distinguished from the other locations in the feasible region by applying a different transparency (or another prompt, such as displaying a prompt icon at the corresponding candidate location), so that the user can quickly and accurately select a location at which to deploy the first virtual prop.
In some embodiments, the candidate locations are determined as follows: obtaining a plurality of historical positions at which the first virtual prop was deployed; and determining at least one candidate location for deploying the first virtual prop based on the plurality of historical positions; wherein the type of candidate location comprises at least one of: the historical position with the strongest attack capability against enemies; the historical position with the highest concealment; the historical position with the highest deployment frequency; an aggregated location of the plurality of historical positions.
For example, the candidate positions are determined according to the user's past usage habits: selecting a historical position with high lethality against enemies; selecting a historical position with high concealment; selecting the historical position the user has used most frequently; or aggregating the historical positions used within a sampling time window and taking the aggregated position as a candidate position.
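A small sketch of the aggregation variant follows, assuming positions are 2D coordinates and that "aggregation" means bucketing history into grid cells and keeping the most used cells; this interpretation is an assumption.

```python
from collections import Counter

def candidate_locations(history, cell=10.0, top_k=3):
    """history: (x, y) deployment positions inside the sampling time window.
    Returns up to top_k (position, use_count) pairs for the busiest cells."""
    counts = Counter((round(x / cell), round(y / cell)) for x, y in history)
    return [((cx * cell, cy * cell), n) for (cx, cy), n in counts.most_common(top_k)]
```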
In some embodiments, the candidate locations are determined as follows: acquiring scene data of a virtual scene; calling a position prediction model to perform position prediction processing based on scene data of the virtual scene and the first virtual prop to obtain at least one candidate position for deploying the first virtual prop; the position prediction model is obtained through training of historical scene data, deployed historical virtual props and position labels for deploying the historical virtual props.
For example, the candidate positions are obtained by means of artificial intelligence: a position prediction model is called to perform position prediction based on the scene data of the current virtual scene and the first virtual prop, yielding at least one candidate position for deploying the first virtual prop. The scene data may include the distribution of both sides' virtual objects during battle, the attributes of the virtual objects (attack capability, moving speed, and the like), the distribution of both sides' virtual props, and the attributes of the virtual props (including various performance parameters, such as kill radius). The position prediction model is trained on historical scene data (the same kinds of data drawn from historical battles, together with win-loss outcomes), deployed historical virtual props, and position labels indicating where those historical virtual props were deployed.
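As a rough sketch of this learned variant, a generic multi-output regressor can stand in for the position prediction model; the feature layout and the model family are assumptions (the disclosure fixes only the training inputs and labels), and scikit-learn is assumed to be available.

```python
from sklearn.ensemble import RandomForestRegressor

def build_features(scene_data, prop):
    # Assumed flat feature vector: object distribution and attributes,
    # prop distribution and performance parameters, plus the prop id.
    return scene_data["object_features"] + scene_data["prop_features"] + [prop["id"]]

model = RandomForestRegressor(n_estimators=100)
# Training: X_train holds feature vectors from historical scene data and
# deployed historical props; y_train holds the (x, y) deployment labels.
# model.fit(X_train, y_train)

# Inference: predict a candidate deployment position for the first prop.
# candidate_xy = model.predict([build_features(scene_data, first_prop)])[0]
```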
In step 103, in response to the location selection operation performed in the map interface, a target location selected in the map interface by the location selection operation is determined.
As shown in fig. 5C, after the map interface 502 is displayed, the target location selected in the map interface 502 by the location selection operation is determined in response to the location selection operation performed in the map interface 502.
In some embodiments, the map interface of the virtual scene is displayed as follows: the first virtual object is controlled to open a display device, and the map interface of the virtual scene is displayed in the screen of the display device. Determining the target position selected by the position selection operation then comprises: in response to a position selection operation performed on the map interface in the screen while the first virtual object is being controlled, determining the target position selected by the position selection operation in the map interface displayed in the screen.
For example, the display device may be virtual reality glasses, a virtual laptop, a virtual tablet, and the like. As shown in fig. 9A, the user controls the first virtual object to turn on the display device and displays a map interface 901 of the virtual scene in the screen of the display device, and in response to controlling a position selection operation of the first virtual object performed in the screen with respect to the map interface 901, determines a target position 902 selected by the position selection operation in the map interface displayed in the screen.
In some embodiments, before the position selection operation performed on the map interface, a plurality of sub-regions is displayed in the map of the virtual scene, and different sub-regions apply different transparencies; the transparency is related to at least one of the following indicators of the sub-region: the attack capability of the sub-region against enemies, and the concealment of the sub-region.
It should be noted that the outline of the map in the embodiment of the present application may be either a regular or an irregular geometric shape. As shown in fig. 5D, a plurality of sub-regions (e.g., sub-regions 504 and 507) is displayed in the map 503 of the virtual scene, and different transparencies are applied to different sub-regions. The differences in transparency represent each sub-region's attack capability against enemies and its concealment, so the sub-regions of the map are accurately divided by transparency and the accuracy of map interaction is improved.
For example, different sub-regions may exist within the feasible region of the same virtual prop. On top of the transparency that already distinguishes the feasible region from the infeasible region, a further differentiated transparency (or another display manner, such as a prompt icon) may be applied, related to the following indicators of each sub-region: 1) the sub-region's attack capability against enemies (depending on the number of enemies within the sub-region's kill range and the distance between the sub-region and the enemies), where the kill may be dealt directly by the virtual prop (e.g., a shell) or indirectly (e.g., when the virtual prop serves as a carrier, by the virtual objects it carries), so that a suitable sub-region for attacking can be selected quickly through transparency; 2) the sub-region's concealment, which represents how difficult it is to stay hidden there, so that a suitable sub-region for hiding can be selected quickly through transparency.
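A small sketch of deriving per-sub-region transparency from the two indicators follows; the 0-1 indicator scales, the equal weights, and the 8-bit alpha range are all assumptions.

```python
def sub_region_alpha(attack_score: float, concealment_score: float,
                     w_attack: float = 0.5, w_conceal: float = 0.5) -> int:
    """Map attack capability and concealment (both assumed in [0, 1])
    to an 8-bit alpha, keeping 0 reserved for the infeasible area."""
    score = w_attack * attack_score + w_conceal * concealment_score
    return max(1, round(score * 255))
```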
In some embodiments, a one-key automatic position selection function button is displayed; in response to a trigger operation on the one-key automatic position selection function button, the trigger operation is identified as the position selection operation, and the optimal position among at least one candidate position for deploying the first virtual prop is determined as the target position; wherein the type of the optimal position comprises at least one of: the candidate position with the strongest attack capability against enemies; the candidate position with the highest concealment; the candidate position with the highest deployment frequency; an aggregated location of the at least one candidate position.
For example, the user may use the one-key automatic position selection function, and the game program automatically selects the optimal position in the feasible area at which to deploy the first virtual prop. The optimal position is selected automatically as follows: selecting a candidate position with high lethality against enemies; selecting a candidate position with high concealment; selecting the candidate position the user has used most frequently; or aggregating the candidate positions used within a sampling time window and taking the aggregated position as the optimal position.
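One way to sketch the one-key selection is a weighted argmax over the candidates; the candidate fields and the additive score below are assumptions.

```python
def auto_select_position(candidates):
    """candidates: dicts with assumed keys 'pos', 'lethality',
    'concealment', and 'use_frequency' (each normalized to [0, 1]).
    Returns the position of the highest-scoring candidate."""
    def score(c):
        return c["lethality"] + c["concealment"] + c["use_frequency"]
    return max(candidates, key=score)["pos"]
```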
In step 104, in response to the target location being in the feasible region, the deployed first virtual prop is displayed at a location in the virtual scene corresponding to the target location.
For example, when the target position is located in the feasible region, map interaction can be performed based on the target position, and the first virtual prop can be deployed at the position in the virtual scene corresponding to the target position. The deployed first virtual prop is therefore displayed at that position, and the function of the first virtual prop takes effect in the virtual scene; for example, if the first virtual prop is a virtual bomb, the virtual bomb explodes at the position in the virtual scene corresponding to the target position.
In some embodiments, in response to the target location being in the infeasible area, a prompt is displayed, wherein the prompt indicates that the target location is invalid and a new target location needs to be selected.
For example, when the target position is determined to be located in the infeasible area, the outline of the feasible area is highlighted so that a new target position is selected inside the outline, and the first virtual prop is deployed at a position corresponding to the new target position in the virtual scene.
In some embodiments, before the deployed first virtual item is displayed at a position in the virtual scene corresponding to the target position, determining a plurality of reference points of a map in a map interface, and determining a mapping point in the virtual scene corresponding to the reference points; performing the following processing for any one of the plurality of reference points: determining a candidate position corresponding to the target position in the virtual scene based on the vector from the target position to the reference point, the mapping point corresponding to the reference point and the mapping relation between the map and the virtual scene; and averaging the candidate positions respectively determined based on the plurality of reference points to obtain a position corresponding to the target position in the virtual scene.
For example, the target position is mapped into the virtual scene as follows: determine a plurality of reference points of the map (e.g., A1, B1, C1 shown in fig. 12) and the mapping points in the virtual scene corresponding to the reference points (e.g., A, B, C shown in fig. 11); for each of the plurality of reference points, determine a candidate position in the virtual scene corresponding to the target position based on the vector from the target position to the reference point, the mapping point corresponding to the reference point, and the mapping relationship between the map and the virtual scene; and average the candidate positions determined from the respective reference points to obtain the position in the virtual scene corresponding to the target position, at which the first virtual prop is then displayed.
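This is the inverse of the scene-to-map sketch shown earlier; the same assumed reference points and uniform scale are restated here so the sketch stays self-contained, and only the direction of the projection changes.

```python
MAP_REFS   = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]    # A1, B1, C1 in the map
SCENE_REFS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # A, B, C in the scene
MAP_TO_SCENE_SCALE = 10.0  # assumed inverse of the scene-to-map scale

def map_to_scene(target):
    """Project the selected target position back into the virtual scene,
    averaging the candidates produced by each reference point."""
    candidates = []
    for (mx, my), (sx, sy) in zip(MAP_REFS, SCENE_REFS):
        vx = (target[0] - mx) * MAP_TO_SCENE_SCALE
        vy = (target[1] - my) * MAP_TO_SCENE_SCALE
        candidates.append((sx + vx, sy + vy))
    n = len(candidates)
    return (sum(cx for cx, _ in candidates) / n,
            sum(cy for _, cy in candidates) / n)
```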
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiment of the application can be applied to various virtual scenes, for example, in the virtual scene of a game and the like, and the real fighting process between virtual objects can be simulated.
The following description will take a virtual scene as an example of a game for confrontation:
in the confrontation game, the embodiment of the present application introduces a number of virtual props with kill skills, most of which depend on interaction with the map. The game program requires the player to select a location in the map, and then determines whether the selected location is within a feasible area (i.e., an area in which game characters can act); when it is, the selected location is used as the interaction location of the virtual prop (e.g., the virtual prop is summoned at the selected location, or virtual props at other locations are summoned to the selected location). If the map were a regular figure such as a square or a circle, a formula could be used for direct calculation (i.e., checking whether the abscissa and ordinate fall within the coordinate range of the feasible region), but the maps in shooting games are completely irregular and cannot be calculated with a conventional mathematical formula, so an effective way of determining the feasible region in the map needs to be designed.
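For contrast, the direct formula check that suffices for a regular map really is a one-liner per shape; a minimal sketch with assumed coordinates:

```python
def in_square_map(x, y, left, top, size):
    """Axis-aligned square: test both coordinate ranges directly."""
    return left <= x <= left + size and top <= y <= top + size

def in_circular_map(x, y, cx, cy, radius):
    """Circle: compare the squared distance to the center with the radius."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
```

No such closed-form test exists for an arbitrary irregular outline, which motivates the transparency-based scheme below.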
In this regard, the embodiment of the present application provides an implementation that can determine the feasible region quickly. All pictures have colors, and inside the game program those colors carry transparencies (alpha values). It is therefore enough that, when each map is designed, the alpha value of the infeasible region is set to a fixed value different from every alpha value used in the feasible region; for example, the art design resources set the alpha values of all infeasible areas to 0. When designing the game logic, it is then only necessary to judge whether the alpha value at the selected position point is greater than 0: when it is, the selected position point is located in the feasible area. This scheme handles maps with arbitrarily irregular figures in a simple way; the range of the feasible area is controlled entirely by planning and art, and the game program is only responsible for implementing the corresponding function.
A regular map is too simple; players prefer more complex maps, and the more complex the map, the more irregular its four sides become. As shown in fig. 6A, the boundary of the map 602 in the map interface 601 is an irregular figure. Compared with a regular map, such an irregular map cannot be calculated with simple logic; as shown in fig. 6B, the boundary of the map 602 in the map interface 601 is irregular, the map 602 has many concave and convex positions, and the topography of such a map is relatively complicated.
Since an irregular map is introduced, the feasible region needs to be determined, where the feasible region represents the region of the map that all game characters can reach, and the infeasible region represents the region that game characters cannot reach. As shown in fig. 7, no matter how complex the feasible region of the map 701 is, the map 701 is ultimately displayed inside a regular square or rectangular picture 702.
As shown in fig. 8, the embodiment of the present application may provide a material 801. The material 801 can access each pixel point in a picture, extract the alpha value of any pixel point, and determine from the alpha value whether the pixel point is located in the feasible region. For this to work, the art team needs to set all alpha values of the infeasible region to 0 at the initial stage of producing the picture.
In the confrontation game, some virtual props require the player to perform certain interactive operations with the map; for example, the player needs to summon a virtual airplane, a virtual missile, or the like at a certain position of the map. To this end, as shown in fig. 9A, the player summons, for the game character, a virtual screen 901 that can interact with the map. When the player clicks the area 902 in the virtual screen 901, the summoning function does not take effect because the area 902 is not in the feasible area, i.e., no virtual airplane, virtual missile, or the like is summoned. When the player clicks an area inside the boundary 903 in the virtual screen 901, as shown in fig. 9B, a selected identifier 904 is displayed at the selected position in the virtual screen 901, and the summoning function takes effect at the position corresponding to the selected identifier 904. This technique therefore needs no configuration at all, judges the map boundary very accurately, and can be applied to any irregular map.
Referring to fig. 10, fig. 10 is a schematic flowchart of a map interaction processing method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 10:
step 1, equipping the skill virtual prop.
And 2, determining whether the skill virtual prop is activated. When the skill virtual prop is activated, executing step 3; and when the skill virtual prop is not activated, continuously equipping the skill virtual prop.
And 3, highlighting an icon corresponding to the skill virtual prop.
And 4, determining whether the skill virtual prop is used. When the skill virtual item is confirmed to be used, executing the step 5; and when the skill virtual prop is confirmed not to be used, the icon corresponding to the skill virtual prop is continuously highlighted.
And 5, switching out the virtual notebook computer, and displaying the map in the virtual notebook computer.
And step 6, confirming whether the map picture is clicked. When the map picture is confirmed to be clicked, executing step 7; when the map picture is not clicked, continuing to display the map.
And 7, acquiring the alpha value of the pixel point corresponding to the click position in the map according to the click position.
And 8, determining whether the alpha value of the pixel point at the click position is greater than 0. When the alpha value of the pixel point of the click position is greater than 0, executing the step 9; and when the alpha value of the pixel point at the click position is less than or equal to 0, executing the step 10.
And 9, determining that the click position is located in the feasible region, and displaying a selected identifier at the click position of the player.
And step 10, determining that the click position is located in an infeasible area.
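Steps 6 to 10 above can be tied together in a few lines. This sketch assumes the minimap picture is a Pillow RGBA image and that the click position is already given in the picture's pixel coordinates; the helper name and return convention are illustrative.

```python
from PIL import Image

def on_map_clicked(map_picture: Image.Image, click_x: int, click_y: int):
    # Step 7: read the alpha value of the pixel at the click position.
    _, _, _, alpha = map_picture.convert("RGBA").getpixel((click_x, click_y))
    if alpha > 0:
        # Steps 8-9: the click lies in the feasible region;
        # show the selected identifier at the click position.
        return ("feasible", (click_x, click_y))
    # Step 10: the click position is located in the infeasible area.
    return ("infeasible", None)
```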
The following describes the mapping principle between a map (also called a small map) and a virtual scene (also called a large map):
firstly, a mapping relationship between the positions of game characters in the large map and their positions in the small map needs to be established. The principle of the mapping relationship is to select three reference points in the large map (A, B, C shown in fig. 11).
Similarly, as shown in fig. 12, mapping points a1, B1, and C1 corresponding to A, B, C three points in the large map are acquired in the small map.
Then, the distances and directions from the game character to A, B, and C in the large map are calculated respectively, yielding the vectors AP, BP, and CP (P denotes the game character's corresponding point in the large map; AP denotes the distance and direction between point A and the game character, BP between point B and the game character, and CP between point C and the game character). Based on the mapping relationship between the large map and the small map and on AP, BP, and CP, the vectors A1P1, B1P2, and C1P3 are then obtained (P1 is the first mapping point of P in the small map calculated from AP, P2 the second calculated from BP, and P3 the third calculated from CP). Finally, the average P0 of the three points P1, P2, and P3 is taken, and P0 is the position at which the game character is displayed in the small map. Likewise, the position on the large map of a point selected on the small map can be derived in reverse.
In summary, the map interaction method in a virtual scene provided by the embodiment of the present application makes it possible to add all kinds of irregular map figures to a virtual scene, and supplies both a method for judging positions on such irregular figures and application examples of their use.
The map interaction processing method in the virtual scene provided by the embodiment of the present application has been described with reference to the exemplary application and implementation of the terminal provided by the embodiment of the present application, and the following continues to describe the map interaction processing scheme in the virtual scene implemented by the cooperation of the modules in the map interaction processing device 465 in the virtual scene provided by the embodiment of the present application.
A first display module 4651, configured to display a virtual scene, where the virtual scene includes a plurality of virtual objects; a second display module 4652, configured to, in response to a deployment trigger operation of a first virtual prop equipped for a first virtual object, display a map interface of the virtual scene, and display a map of the virtual scene in the map interface, where a transparency of an infeasible area in the map interface is different from a transparency of a feasible area, and the infeasible area is an area in which any virtual object cannot act; a determination module 4653, configured to determine, in response to a location selection operation performed on the map interface, a target location selected by the location selection operation in the map interface; a third display module 4654, configured to display the deployed first virtual prop in the virtual scene at a location corresponding to the target location in response to the target location being located in the feasible region.
In some embodiments, the second display module 4652 is further configured to control the first virtual object to open a display device; displaying a map interface of the virtual scene in a screen of the display device; the determining module 4653 is further configured to determine a target location selected by the location selection operation in the map interface displayed in the screen in response to controlling the location selection operation of the first virtual object in the screen with respect to the map interface.
In some embodiments, uniform transparency is applied to the pixel points of the virtual scene in the infeasible area; before responding to the target location being within the feasible region, the third display module 4654 is further configured to determine an area within the outline of the map interface and outside the outline of the map as the infeasible region, and determine the map as the feasible region; and determine that the target position is located in the feasible region in response to the transparency of the pixel point corresponding to the target position in the map interface being different from the transparency of the infeasible region.
In some embodiments, prior to responding to the target location being located within the feasible region, the third display module 4654 is further configured to determine an area within an outline of the map interface and outside the outline of the map as the infeasible region and an area of the map that is unavailable for movement therein of any of the virtual objects as the infeasible region; determining an area of the map available for movement of any of the virtual objects therein as the feasible area; and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
In some embodiments, the third display module 4654 is further configured to determine an inactive area set in the virtual scene for any of the virtual objects; wherein the inactive area comprises at least one of: a plane corresponding to a space that cannot accommodate any virtual object, and a top surface corresponding to a virtual item that cannot be reached by any virtual object; and determine the area in the map corresponding to the inactive area as the infeasible area.
In some embodiments, prior to responding to the target location being located in the feasible region, the third display module 4654 is further configured to determine an area within the outline of the map interface and outside the outline of the map as an infeasible region and determine an area of the map where deployment of the first virtual prop is restricted as an infeasible region; determining an area of the map where deployment of the first virtual prop is not restricted as the feasible area; and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
In some embodiments, the third display module 4654 is also used to determine an attribute of the first virtual prop; determining factors that limit deployment of the first virtual item in the virtual scene based on the attribute of the first virtual item; wherein the factors include at least one of: environmental factors conflicting with the deployment environment required by the first virtual prop and environmental factors limiting the release of the skill of the first virtual prop; and determining the area corresponding to the factors in the map as the infeasible area.
In some embodiments, the contour of the map interface is a regular geometric shape and the contour of the map is a regular or irregular geometric shape.
In some embodiments, the second display module 4652 is further configured to display a plurality of candidate virtual items in a virtual item store, wherein the plurality of candidate virtual items includes the first virtual item; in response to an arming operation for the first virtual item, arming the first virtual item to a list of virtual items of the first virtual object; and responding to the activation operation aiming at the first virtual prop in the virtual prop list, and displaying the activation state identification of the first virtual prop.
In some embodiments, the third display module 4654 is further configured to display a prompt in response to the target location being in the infeasible area, wherein the prompt indicates that the target location is invalid and a new target location needs to be selected.
In some embodiments, before the deployed first virtual item is displayed at a location in the virtual scene corresponding to the target location, the third display module 4654 is further configured to determine a plurality of reference points of a map in the map interface and determine a mapping point in the virtual scene corresponding to the reference points; performing the following for any of the plurality of reference points: determining a candidate position corresponding to the target position in the virtual scene based on the vector of the target position to the reference point, the mapping point corresponding to the reference point and the mapping relation between the map and the virtual scene; and averaging the candidate positions respectively determined based on the plurality of reference points to obtain a position corresponding to the target position in the virtual scene.
In some embodiments, when displaying the map of the virtual scene in the map interface, the first display module 4651 is further operable to determine the location of the plurality of virtual objects in the virtual scene; mapping the positions of the virtual objects in the virtual scene to the map based on the mapping relation between the map and the virtual scene to obtain the display positions of the virtual objects in the map; and displaying the position identifications of the plurality of virtual objects at the display position.
In some embodiments, the first display module 4651 is further configured to display a plurality of sub-regions in a map of the virtual scene, different sub-regions applying different degrees of transparency; wherein the transparency is related to at least one of the following indicators of the sub-region: the attack ability of the sub-area to an adversary, the concealment of the sub-area.
In some embodiments, when displaying a map of the virtual scene in the map interface, the second display module 4652 is further for displaying at least one candidate location in the feasible region for deploying the first virtual prop; wherein a transparency different from other locations in the feasible region, which represent locations in the feasible region other than the at least one candidate location, is applied at the at least one candidate location.
In some embodiments, the second display module 4652 is further configured to obtain a plurality of historical locations for deploying the first virtual prop; determining at least one candidate location for deploying the first virtual prop based on the plurality of historical locations; wherein the type of the candidate location comprises at least one of: the historical position with the strongest attack capability on the enemy; the historical location with the highest concealment; deploying the historical position with the highest frequency; aggregating the plurality of historical locations.
In some embodiments, the second display module 4652 is further configured to obtain scene data of the virtual scene; calling a position prediction model to perform position prediction processing based on the scene data of the virtual scene and the first virtual prop to obtain at least one candidate position for deploying the first virtual prop; the position prediction model is obtained through training of historical scene data, deployed historical virtual props and position labels for deploying the historical virtual props.
In some embodiments, the second display module 4652 is further configured to display a one-touch auto-select position function button; in response to a triggering operation of the one-key automatic selection location function button, identifying the triggering operation as the location selection operation, and determining an optimal location of at least one candidate location for deploying the first virtual prop as the target location; wherein the type of the optimal position comprises at least one of: candidate positions with the strongest attack ability on enemies; candidate positions with highest concealment; deploying the candidate position with the highest frequency; aggregating the at least one candidate location.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the map interaction processing method of the virtual scene in the embodiment of the application.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions, where the executable instructions are stored, and when being executed by a processor, the executable instructions will cause the processor to execute a map interaction processing method of a virtual scene provided in the embodiment of the present application, for example, the map interaction processing method of the virtual scene shown in fig. 4A and 4C.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
It is understood that, in the embodiments of the present application, the data related to the historical interaction records of the user information and the like needs to obtain user permission or consent when the embodiments of the present application are applied to specific products or technologies, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related countries and regions.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (21)

1. A map interaction processing method in a virtual scene is characterized by comprising the following steps:
displaying a virtual scene, wherein the virtual scene comprises a plurality of virtual objects;
responding to the deployment triggering operation of the first virtual prop equipped for the first virtual object, displaying a map interface of the virtual scene, and
displaying a map of the virtual scene in the map interface, wherein a transparency of an infeasible area in the map interface is distinct from a transparency of a feasible area, the infeasible area being an area in which any of the virtual objects cannot act;
in response to a location selection operation implemented at the map interface, determining a target location selected in the map interface by the location selection operation;
in response to the target location being located in the feasible region, displaying the deployed first virtual prop in the virtual scene at a location corresponding to the target location.
2. The method of claim 1,
the map interface for displaying the virtual scene comprises:
controlling the first virtual object to open a display device;
displaying a map interface of the virtual scene in a screen of the display device;
the determining the target position selected by the position selection operation in the map interface in response to the position selection operation implemented in the map interface comprises:
in response to a position selection operation performed on the map interface in the screen while the first virtual object is being controlled, determining a target position selected by the position selection operation in the map interface displayed in the screen.
3. The method of claim 1,
uniform transparency is applied to the pixel points of the virtual scene in the infeasible area;
before responding to the target location being within the feasible region, the method further comprises:
determining an area within the outline of the map interface and outside the outline of the map as the infeasible area, and determining the map as the feasible area;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
4. The method of claim 1, wherein prior to responding to the target location being within the feasible region, the method further comprises:
determining an area within the outline of the map interface and outside the outline of the map as the infeasible area, an
Determining as the infeasible area an area of the map that is unavailable for movement of any of the virtual objects therein;
determining an area of the map available for movement of any of the virtual objects therein as the feasible area;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
5. The method of claim 4, wherein determining as the infeasible area an area of the map that is unavailable for movement of any of the virtual objects therein comprises:
determining an inactive area set for any virtual object in the virtual scene;
wherein the inactive area comprises at least one of: a plane corresponding to a space that cannot accommodate any virtual object, and a top surface corresponding to a virtual item that cannot be reached by any virtual object;

and determining the area in the map corresponding to the inactive area as the infeasible area.
6. The method of claim 1, wherein prior to responding to the target location being within the feasible region, the method further comprises:
determining an area within the outline of the map interface and outside the outline of the map as an infeasible area, an
Determining an area of the map where deployment of the first virtual prop is limited as an infeasible area;
determining an area of the map where deployment of the first virtual prop is not restricted as the feasible area;
and determining that the target position is located in the feasible region in response to the fact that the transparency of the corresponding pixel point of the target position in the map interface is different from the transparency of the infeasible region.
7. The method of claim 6, wherein said determining an area of the map where deployment of the first virtual prop is restricted to be an infeasible area comprises:
determining an attribute of the first virtual item;
determining factors that limit deployment of the first virtual item in the virtual scene based on the attribute of the first virtual item;
wherein the factors include at least one of: environmental factors conflicting with the deployment environment required by the first virtual prop and environmental factors limiting the release of the skill of the first virtual prop;
and determining the area corresponding to the factors in the map as the infeasible area.
8. The method of claim 1,
the contour of the map interface is a regular geometric shape and the contour of the map is a regular or irregular geometric shape.
9. The method of claim 1, further comprising:
displaying a plurality of candidate virtual items in a virtual item store, wherein the plurality of candidate virtual items includes the first virtual item;
in response to an arming operation for the first virtual item, arming the first virtual item to a list of virtual items of the first virtual object;
and responding to the activation operation aiming at the first virtual prop in the virtual prop list, and displaying the activation state identification of the first virtual prop.
10. The method of claim 1, further comprising:
and responding to the target position located in the infeasible area, displaying prompt information, wherein the prompt information represents that the target position is invalid and a new target position needs to be selected.
11. The method of claim 1, wherein prior to displaying the deployed first virtual prop at a location in the virtual scene corresponding to the target location, the method further comprises:
determining a plurality of reference points of a map in the map interface, and determining mapping points corresponding to the reference points in the virtual scene;
performing the following for any of the plurality of reference points: determining a candidate position corresponding to the target position in the virtual scene based on the vector of the target position to the reference point, the mapping point corresponding to the reference point and the mapping relation between the map and the virtual scene;
and averaging the candidate positions respectively determined based on the plurality of reference points to obtain a position corresponding to the target position in the virtual scene.
12. The method of claim 1, wherein when displaying the map of the virtual scene in the map interface, the method further comprises:
determining locations of the plurality of virtual objects in the virtual scene;
mapping the positions of the virtual objects in the virtual scene to the map based on the mapping relation between the map and the virtual scene to obtain the display positions of the virtual objects in the map;
and displaying the position identifications of the plurality of virtual objects at the display position.
13. The method of claim 1, further comprising:
displaying a plurality of sub-regions in a map of the virtual scene, different sub-regions applying different transparencies;
wherein the transparency is related to at least one of the following indicators of the sub-region: the attack ability of the sub-area to an adversary, the concealment of the sub-area.
14. The method of claim 1, wherein when displaying the map of the virtual scene in the map interface, the method further comprises:
displaying at least one candidate location in the feasible region for deploying the first virtual prop;
wherein a transparency different from other locations in the feasible region, which represent locations in the feasible region other than the at least one candidate location, is applied at the at least one candidate location.
15. The method of claim 14, further comprising:
obtaining a plurality of historical positions for deploying the first virtual prop;
determining at least one candidate location for deploying the first virtual prop based on the plurality of historical locations;
wherein the type of the candidate location comprises at least one of: the historical position with the strongest attack capability against enemies; the historical position with the highest concealment; the historical position with the highest deployment frequency; an aggregated location of the plurality of historical locations.
16. The method of claim 14, further comprising:
acquiring scene data of the virtual scene;
calling a position prediction model to perform position prediction processing based on the scene data of the virtual scene and the first virtual prop to obtain at least one candidate position for deploying the first virtual prop;
the position prediction model is obtained through training of historical scene data, deployed historical virtual props and position labels for deploying the historical virtual props.
17. The method of claim 1, further comprising:
displaying a one-key automatic selection position function button;
in response to a triggering operation of the one-key automatic selection location function button, identifying the triggering operation as the location selection operation, and determining an optimal location of at least one candidate location for deploying the first virtual prop as the target location;
wherein the type of the optimal position comprises at least one of: the candidate position with the strongest attack capability against enemies; the candidate position with the highest concealment; the candidate position with the highest deployment frequency; an aggregated location of the at least one candidate location.
18. An apparatus for processing map interaction in a virtual scene, the apparatus comprising:
the system comprises a first display module, a second display module and a display module, wherein the first display module is used for displaying a virtual scene, and the virtual scene comprises a plurality of virtual objects;
the second display module is used for responding to deployment triggering operation of a first virtual prop equipped for a first virtual object, displaying a map interface of the virtual scene, and displaying a map of the virtual scene in the map interface, wherein the transparency of an infeasible area in the map interface is different from that of a feasible area, and the infeasible area is an area in which any virtual object cannot act;
the determining module is used for responding to the position selection operation implemented on the map interface and determining the target position selected by the position selection operation in the map interface;
and a third display module, configured to display the deployed first virtual prop in a position in the virtual scene corresponding to the target position in response to the target position being located in the feasible region.
19. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, and implement the map interaction processing method in the virtual scene according to any one of claims 1 to 17.
20. A computer-readable storage medium storing executable instructions for implementing the map interaction processing method in the virtual scene according to any one of claims 1 to 17 when executed by a processor.
21. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the map interaction processing method in a virtual scene of any of claims 1 to 17.