CN116474366A - Display control method, display control device, electronic apparatus, storage medium, and program product - Google Patents


Info

Publication number
CN116474366A
Authority
CN
China
Prior art keywords
virtual
response area
virtual character
response
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310357111.7A
Other languages
Chinese (zh)
Inventor
陶欣怡
林�智
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310357111.7A priority Critical patent/CN116474366A/en
Publication of CN116474366A publication Critical patent/CN116474366A/en
Pending legal-status Critical Current


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a display control method, a display control device, an electronic device, a storage medium and a program product. A graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a first virtual character controlled by the terminal device. The method includes the following steps: providing an interaction component in the graphical user interface; in response to a first trigger instruction for the interaction component, acquiring interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface, and determining a response area corresponding to the second virtual character according to the interaction information; and displaying the response area. In this way, virtual characters can determine the response areas of other virtual characters and avoid attacks from enemy virtual characters; in a multi-character melee scene, whether an enemy can attack can be determined from the displayed response area of each virtual character, the attack ranges of the two opposing sides can be distinguished, and the game experience of the user is improved.

Description

Display control method, display control device, electronic apparatus, storage medium, and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display control method, a display control device, an electronic device, a storage medium, and a program product.
Background
In the related art, the response ranges of virtual characters in a game scene differ, and different virtual characters cannot learn each other's response ranges. In particular, in a multi-character melee scene, because the current virtual character cannot learn the skill-release response ranges of other virtual characters, it cannot avoid attacks from enemy virtual characters, resulting in a poor user experience.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a display control method, apparatus, electronic device, storage medium, and program product.
In view of the above object, in a first aspect, the present application provides a display control method,
providing a graphical user interface through a terminal device, wherein at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a first virtual character controlled by the terminal device; the method includes the following steps:
providing an interaction component in the graphical user interface;
in response to receiving a first trigger instruction for the interaction component, acquiring interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface, and determining a response area corresponding to the second virtual character according to the interaction information;
and displaying the response area.
In a second aspect, the present application provides a display control apparatus,
providing a graphical user interface through a terminal device, wherein at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a first virtual character controlled by the terminal device; the device comprises:
a first display module configured to provide an interactive component in the graphical user interface;
a determining module configured to, in response to receiving a first trigger instruction for the interaction component, acquire interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface, and determine a response area corresponding to the second virtual character according to the interaction information;
and a second display module configured to display the response area.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the display control method according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer instructions for causing a computer to perform the display control method according to the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the display control method according to the first aspect.
As can be seen from the foregoing, in the display control method, apparatus, electronic device, storage medium and program product provided in the present application, a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene includes a first virtual character controlled by the terminal device, and an interaction component may be provided in the graphical user interface. When a first trigger instruction for the interaction component is received, interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface can be obtained, a response area corresponding to the second virtual character can be determined according to the interaction information, and the response area can then be displayed. In this way, different virtual characters can determine the response areas of other virtual characters and avoid attacks from enemy virtual characters; in a multi-character melee scene, whether an enemy virtual character is able to attack can be determined from the displayed response area of each virtual character, the attack ranges of the two opposing sides can be distinguished, and the game experience of the user is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only embodiments of the present application, and that those of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 shows an exemplary flowchart of a display control method according to an embodiment of the present application.
FIG. 2 shows an exemplary schematic diagram of a graphical user interface in an embodiment according to the application.
Fig. 3 shows an exemplary schematic diagram of a three-dimensional heat map according to an embodiment of the present application.
Fig. 4 shows an exemplary schematic diagram of a planar heat map according to an embodiment of the present application.
Fig. 5 shows an exemplary structural schematic diagram of a display control apparatus provided in an embodiment of the present application.
Fig. 6 shows an exemplary structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application shall have the ordinary meaning understood by those of ordinary skill in the art to which the present application belongs. The terms "first," "second," and the like used in the embodiments of the present application do not denote any order, quantity, or importance, but are merely used to distinguish one element from another. The word "comprising," "comprises," or the like means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper," "lower," "left," "right," and the like are used merely to indicate relative positional relationships, which may change accordingly when the absolute position of the described object changes.
As described in the Background section, the response ranges of virtual characters in a game scene differ, and different virtual characters cannot learn each other's response ranges.
According to the inventors' research, in the related art, for example in a multi-character melee scene, the current virtual character cannot learn the skill-release response ranges of other virtual characters and therefore cannot avoid attacks from enemy virtual characters, resulting in a poor user experience.
In view of this, the present application provides a display control method, apparatus, electronic device, storage medium, and program product, in which a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, the virtual scene includes a first virtual character controlled by the terminal device, and an interaction component may be provided in the graphical user interface. When a first trigger instruction for the interaction component is received, interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface can be obtained, a response area corresponding to the second virtual character can be determined according to the interaction information, and the response area can then be displayed. In this way, different virtual characters can determine the response areas of other virtual characters and avoid attacks from enemy virtual characters; in a multi-character melee scene, whether an enemy virtual character is able to attack can be determined from the displayed response area of each virtual character, the attack ranges of the two opposing sides can be distinguished, and the game experience of the user is thereby improved.
The display control method provided in the embodiments of the present application is described in detail below through specific embodiments.
Fig. 1 shows an exemplary flowchart of a display control method according to an embodiment of the present application.
Referring to fig. 1, in the display control method provided in the embodiment of the present application, a graphical user interface may be provided through a terminal device, where at least a part of a virtual scene is displayed in the graphical user interface, where the virtual scene includes a first virtual character controlled by the terminal device; the method specifically comprises the following steps:
s102: an interactive component is provided in the graphical user interface.
S104: and responding to the received first trigger instruction aiming at the interaction component, acquiring interaction information corresponding to a second virtual role in a target virtual scene area currently displayed by the graphical user interface, and determining a response area corresponding to the second virtual role according to the interaction information.
S106: and displaying the response area.
In some embodiments, a graphical user interface may be provided by the terminal device, where at least a portion of a virtual scene is presented, and the virtual scene includes a first virtual character controlled by the terminal device. Multiple virtual characters may exist simultaneously in the virtual scene, for example in a multi-character melee scene. The first virtual character may be the virtual character controlled by the current user, and the virtual scene displayed in the current graphical user interface is the virtual scene observed from the main viewing angle of the first virtual character.
FIG. 2 shows an exemplary schematic diagram of a graphical user interface in an embodiment according to the application.
In some embodiments, referring to fig. 2, after the target game is started, the virtual scene observed from the main viewing angle of the first virtual character (for example, the virtual character controlled through the terminal device used by the current user), together with the other virtual characters and environmental elements appearing in that scene, is displayed as the virtual scene shown in the graphical user interface. The other virtual characters are virtual characters other than the first virtual character, and may include virtual characters in the same camp as the first virtual character or virtual characters in a different camp from the first virtual character.
In some embodiments, the graphical user interface may also be displayed when a display operation instruction is received after the target game is started. The display operation instruction may include a key instruction signal generated by performing a target trigger operation on a target key of a target peripheral, where the target key may be a single key or a combination of keys on the target peripheral. For example, the key instruction signal may be generated by pressing the "K" key on a keyboard; alternatively, by simultaneously pressing the "X" and "Y" keys on a gamepad; or, alternatively, a "display" button may be presented in the graphical user interface, and the key instruction signal is generated by moving the mouse to hover over and click the "display" button.
It should be noted that the display operation instruction may be determined by performing a click operation on a target position through the target peripheral, by performing a touch operation on the target position through a touch device or by hand, or by providing an image acquisition module (for example, a camera module) in the terminal device and receiving, through the image acquisition module, a gesture instruction issued by the user.
Further, it can be determined whether virtual characters of at least two different camps exist in the virtual scene displayed by the graphical user interface, so as to determine whether the virtual scene contains a multi-character melee. Specifically, it can be determined whether at least two different camp identifiers exist in the virtual scene, where the camp identifiers correspond one-to-one with the virtual characters and can be used to indicate the camp to which each virtual character belongs. It can be understood, referring to fig. 2, that the camp identifiers corresponding to different camps are different; for example, the camp identifier corresponding to the first camp is A and the camp identifier corresponding to the second camp is B, and the camp identifiers corresponding to all virtual characters in the same camp are the same, that is, the camp identifier corresponding to each virtual character in the first camp is A. If it is determined that at least two different camp identifiers do not exist in the virtual scene, that is, there is no other virtual character in the virtual scene or there are only virtual characters in the same camp as the first virtual character, the scene can be regarded as a non-combat scene, and no additional interaction component needs to be displayed.
It should be noted that, if it is determined that at least two different camp identifiers do not exist in the virtual scene, that is, there is no other virtual character in the virtual scene or there are only virtual characters in the same camp as the first virtual character, the scene can be regarded as a non-combat scene, and an interaction component can still be displayed; when triggered, this interaction component displays the response area of a virtual character (for example, the second virtual character) in the same camp as the first virtual character.
In some embodiments, if it is determined that at least two different camp identifiers exist in the virtual scene, it is determined that virtual characters of at least two different camps exist in the virtual scene, for example a second virtual character in a different camp from the first virtual character; the current scene can then be determined to be a multi-character melee scene, and the interaction component can be displayed.
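As an illustrative sketch (not part of the patent disclosure), the camp-identifier check described above can be expressed as follows; the `VirtualCharacter` class, field names, and camp values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VirtualCharacter:
    name: str
    camp_id: str  # camp identifier, e.g. "A" or "B"

def is_mixed_combat(characters):
    """A displayed scene is a multi-camp (melee) scene when the characters
    in it carry at least two distinct camp identifiers."""
    return len({c.camp_id for c in characters}) >= 2

# Hypothetical scene contents:
scene = [VirtualCharacter("player", "A"),
         VirtualCharacter("ally", "A"),
         VirtualCharacter("enemy", "B")]
```

With a scene containing camps A and B, `is_mixed_combat(scene)` returns `True`, so the interaction component would be displayed; with only same-camp characters it returns `False`.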
Further, the interaction information may include skill release information corresponding to the second virtual character. When a first trigger instruction for the interaction component is received, skill release information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface can be obtained. The first trigger instruction may include a key instruction signal generated by performing a first trigger operation on a target key of a target peripheral, where the target key may be a single key or a combination of keys on the target peripheral; for example, the key instruction signal may be generated by pressing the "K" key on a keyboard, by simultaneously pressing the "X" and "Y" keys on a gamepad, or by moving the mouse to hover over and click the interaction component.
It should be noted that the first trigger instruction may be determined by performing a click operation on a target position through the target peripheral, by performing a touch operation on the target position through a touch device or by hand, or by providing an image acquisition module (for example, a camera module) in the terminal device and receiving, through the image acquisition module, a gesture instruction issued by the user.
It should be noted that the skill release information may include a skill release range and a character orientation. The skill release range can be used to indicate the range a skill can affect when the second virtual character releases the corresponding skill, and a skill interaction response area corresponding to the virtual character can be determined according to the skill release information, that is, the range affected when the virtual character releases the skill. For example, the skill interaction response area corresponding to the second virtual character can be determined as the area covered by the skill release range over all possible character orientations of the second virtual character. Suppose the skill release range of the second virtual character is a sector and the skill release direction is the character's front orientation: when the second virtual character releases the skill facing away from the first virtual character, the first virtual character is not affected by the skill; however, because the second virtual character may change orientation at the next moment, face the first virtual character, and release the skill, the skill would then affect the first virtual character. Therefore, the interaction area in which the second virtual character could affect other virtual characters at any moment can be determined, ensuring that the other virtual characters can determine the skill interaction response area corresponding to the second virtual character.
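The orientation-independent coverage described above can be sketched as follows. This is a minimal illustration under the assumption that, for a directional (e.g. sector-shaped) skill, the union of its coverage over every possible facing is a disk of the skill's radius around the caster; all names are hypothetical:

```python
import math

def in_skill_response_area(caster_pos, radius, target_pos):
    """Because the caster may rotate before releasing a directional skill,
    the response area shown to other characters is the union of the skill's
    coverage over all facings: for a sector of radius `radius`, that union
    is the full disk of the same radius around the caster."""
    dx = target_pos[0] - caster_pos[0]
    dy = target_pos[1] - caster_pos[1]
    return math.hypot(dx, dy) <= radius
```

A target 5 units from a caster with a 5-unit sector skill is inside the response area regardless of which way the caster currently faces.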
It should be noted that, in general, the range affected by a skill may enclose a 3D stereoscopic region; for example, the range affected by a skill released by a certain virtual character may be a sphere centered on the centroid of that virtual character, and any virtual character that at least partially overlaps the sphere may be affected by the skill released by that virtual character. The skill interaction response area may thus be at least part of the virtual scene centered on the centroid of the virtual character.
Still further, a response area corresponding to the second virtual character can be determined based on the skill interaction response area. Because the skills corresponding to each virtual character can differ, and because the skill corresponding to a certain virtual character may be blocked by an occluding object, the response area corresponding to the virtual character may not be the complete skill interaction response area, so the response area corresponding to the second virtual character needs to be determined. Specifically, it can be determined whether there is an occlusion element at least partially overlapping the skill interaction response area, that is, whether an occlusion element blocks the skill interaction response area. The occlusion element generally does not include a virtual character, because the skill of a virtual character typically acts on other virtual characters; it may instead be an element such as a building, a tree, an inherent non-player character (NPC), or the ground within the game.
When it is determined that an occlusion element at least partially overlapping the skill interaction response area exists, the occluded area between the skill interaction response area and the occlusion element can be determined, and the response area corresponding to the second virtual character can be determined from the skill interaction response area after the occluded area is removed. For example, suppose the skill interaction response area corresponding to the second virtual character is a complete sphere centered on the centroid of the second virtual character, the second virtual character stands on the ground, and no other game element exists around the skill interaction response area. Because the virtual character is attached to the ground, the skill interaction response area of the second virtual character is blocked by the ground, that is, part of the lower hemisphere of the sphere is occluded; after the occluded area is removed from the skill interaction response area, the response area corresponding to the second virtual character can be determined.
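A minimal sketch of removing the ground-occluded part of a spherical skill interaction response area, under the simplifying assumption that the ground is the plane z = 0 and occlusion is a simple half-space clip; the function name and parameters are illustrative:

```python
def point_in_response_area(center, radius, point, ground_z=0.0):
    """A point belongs to the displayed response area only if it lies inside
    the spherical skill interaction response area AND is not in the region
    occluded by the ground plane (i.e. its height is at least ground_z)."""
    dx, dy, dz = (p - c for p, c in zip(point, center))
    inside_sphere = dx * dx + dy * dy + dz * dz <= radius * radius
    above_ground = point[2] >= ground_z
    return inside_sphere and above_ground
```

For a character standing on the ground with a sphere centered slightly above it, points below ground level are excluded even though they fall inside the sphere.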
In some embodiments, the response areas corresponding to virtual characters of different camps can be displayed in different colors in the virtual scene, so that the user can determine, from the colors, the camp to which another virtual character belongs, for example, whether the other virtual character is an ally of the first virtual character (i.e., in the same camp as the first virtual character) or an enemy of the first virtual character (i.e., in a different camp from the first virtual character).
Further, the camp of the first virtual character and the camp of the second virtual character can be determined. When it is determined that the first virtual character and the second virtual character belong to the same camp, the response area corresponding to the second virtual character can be displayed in a first color; that is, the first color is used to indicate that the second virtual character is in the same camp as the first virtual character and is an ally of the first virtual character.
When it is determined that the first virtual character and the second virtual character belong to different camps, the response area corresponding to the second virtual character can be displayed in a second color; that is, the second color is used to indicate that the second virtual character is in a different camp from the first virtual character and is an enemy of the first virtual character.
In addition, in order to display the response areas of the other virtual characters more clearly in the terminal corresponding to the first virtual character, the response area of the first virtual character itself may be rendered transparent, so that when the response area of the first virtual character overlaps the response area of another virtual character, no third color is produced in the terminal corresponding to the first virtual character that could confuse the user's judgment.
It should be noted that the response area of the first virtual character may alternatively be rendered in the first color, so that the display color of the response area of the first virtual character is the same as that of the response areas of allied virtual characters, facilitating the identification of allies. When the response area of the first virtual character overlaps the response area of an allied virtual character, a third color formed by overlapping the first color with the first color is displayed in the terminal corresponding to the first virtual character; similarly, when the response area of the first virtual character overlaps the response area of an enemy virtual character, a fourth color formed by overlapping the first color with the second color is displayed, thereby ensuring that allied and enemy virtual characters can still be distinguished.
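The camp-based coloring can be sketched as below. The RGB values and the average-based blend are assumptions for illustration only; an actual renderer would choose its own palette and blend rule (e.g. darkening same-color overlaps to produce the "third color" described above):

```python
FIRST_COLOR = (0, 0, 255)    # assumed color for allied response areas
SECOND_COLOR = (255, 0, 0)   # assumed color for enemy response areas

def area_color(own_camp, other_camp):
    """Pick the display color of another character's response area from
    whether it shares the first character's camp."""
    return FIRST_COLOR if other_camp == own_camp else SECOND_COLOR

def blend(c1, c2):
    """Illustrative average blend for overlapping areas: an ally/enemy
    overlap yields a distinct mixed color."""
    return tuple((a + b) // 2 for a, b in zip(c1, c2))
```

Here an ally/enemy overlap blends to a purple-ish mixed color, visually distinct from both camp colors.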
Fig. 3 shows an exemplary schematic diagram of a three-dimensional heat map according to an embodiment of the present application.
In some embodiments, in order to display the response areas of the different virtual characters more clearly and intuitively, a heat map may be generated from the response areas and displayed to indicate them; the camp identifiers corresponding to the response areas may be displayed in the heat map, or the response areas of virtual characters of different camps may be distinguished by their display colors (such as the first color and the second color). Referring to fig. 3, a three-dimensional heat map can be generated from the response areas and displayed in the graphical user interface, that is, the content displayed in the graphical user interface can be switched to a heat-map display mode. For example, when only a second virtual character in the same camp as the first virtual character (for example, the first camp) exists in the virtual scene displayed by the graphical user interface, the response area corresponding to the second virtual character is rendered in the first color, and the first color can be bound to the camp identifier of that camp, so that the first color can be used to represent the camp to which the second virtual character belongs. The camp identifier corresponding to the second virtual character, for example A, can also be set at the corresponding position of the response area corresponding to the second virtual character.
Further, when virtual characters of at least two different camps exist in the virtual scene displayed by the graphical user interface, that is, when the camp to which the second virtual character belongs differs from the camp to which the first virtual character belongs (for example, the first virtual character belongs to the first camp and the second virtual character belongs to the second camp), the response area corresponding to the second virtual character is rendered in the second color, and the second color can be bound to the camp identifier of the second camp, so that the second color can be used to represent that the second virtual character belongs to the second camp, where the first color and the second color are different. The camp identifier corresponding to the second virtual character is also set at the corresponding position of its response area; for example, the response area corresponding to a second virtual character belonging to the second camp is marked with camp identifier B.
Fig. 4 shows an exemplary schematic diagram of a planar heat map according to an embodiment of the present application.
In some embodiments, a map control, such as a minimap of the virtual scene, may also be included in the graphical user interface. Referring to fig. 4, a projection of the three-dimensional heat map onto a target surface in the virtual scene can be determined to obtain a planar heat map, and the planar heat map can be displayed at a target scale at a target position of the map control. For example, the projection of the three-dimensional heat map onto the ground in the virtual scene can be determined to obtain the planar heat map. Since the three-dimensional heat map at the original scale is already displayed in the graphical user interface, the planar heat map can be displayed in the minimap so that the heat map is shown more clearly and the user corresponding to the first virtual character can intuitively observe the overall tactical situation.
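A sketch of the ground projection and minimap placement described above, assuming an orthographic projection (dropping the height coordinate) and a simple origin-plus-scale minimap layout; `map_origin` and `map_scale` are hypothetical layout parameters:

```python
def to_minimap(world_point, map_origin, map_scale):
    """Project a 3D heat-map sample onto the ground plane by discarding z,
    then map the ground coordinates into minimap pixel space at the target
    scale and target position (map_origin)."""
    wx, wy, _ = world_point          # orthographic projection onto the ground
    mx = map_origin[0] + wx * map_scale
    my = map_origin[1] + wy * map_scale
    return (mx, my)
```

Applying this to every sample of the three-dimensional heat map yields the planar heat map drawn inside the map control.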
In some embodiments, when it is determined that the response areas corresponding to the first virtual character and the second virtual character at least partially overlap, it can be determined whether the camps to which the first virtual character and the second virtual character belong are the same. For example, when such an overlap is determined, it is established that the first virtual character is within the response area corresponding to the second virtual character, and it can further be determined whether the camp of the second virtual character corresponding to that response area is the same as the camp of the first virtual character. When it is determined that the first virtual character and the second virtual character belong to the same camp, the first virtual character is within the response area of an allied second virtual character and is determined to be in a safe state; first prompt information can be generated to indicate that the first virtual character is within the response area corresponding to an allied second virtual character. Further, since allied characters cannot injure each other, the first virtual character, although within the response area corresponding to the second virtual character, cannot be injured by the second virtual character.
Further, if it is determined that the camp of the first virtual character is different from the camp of the second virtual character, and the first virtual character is located in only one response area, it is determined that the first virtual character is in the response area corresponding to a second virtual character of a different camp and is considered to be in a dangerous state; second prompt information may be generated, the second prompt information being used to indicate that the first virtual character is in the response area corresponding to a second virtual character of a different camp, prompting the first virtual character to be alert and avoid an enemy attack.
Still further, if it is determined that the first virtual character is located in at least two response areas, that is, in the response areas corresponding to at least two second virtual characters, and at least one of those second virtual characters belongs to a different camp, the first virtual character is likewise identified as being in a dangerous state, and second prompt information may be generated, the second prompt information being used to indicate that the first virtual character is in the response areas corresponding to second virtual characters of different camps, prompting the first virtual character to be alert and avoid an enemy attack.
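The safe-state and dangerous-state determination described in the three preceding paragraphs may be summarized, by way of non-limiting illustration, in the following sketch (Python; the function name `classify_state` and the returned labels are illustrative, not part of the application):

```python
def classify_state(own_camp, overlapping_camps):
    """Decide which prompt information to generate, given the camps of all
    second virtual characters whose response areas contain the first
    virtual character."""
    if not overlapping_camps:
        return None                 # not inside any response area
    hostile = [camp for camp in overlapping_camps if camp != own_camp]
    if not hostile:
        return "first_prompt"       # only same-camp response areas: safe state
    return "second_prompt"          # at least one different camp: dangerous state
```

Note that the result is the same whether the first virtual character is in one hostile response area or in several; both cases yield the second prompt information.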
Note that the prompt information (for example, the first prompt information or the second prompt information) may be displayed in the graphical user interface corresponding to the first virtual character, so as to prompt the first virtual character to be alert and avoid an enemy attack. Because the response areas corresponding to different camps are marked with different colors, the positions of the response areas of the different camps can be observed in the thermodynamic diagram. When the color of a certain area is deepened by overlapping, that area is the overlapping part of the response areas of several different camps and may be attacked from multiple sides, so the user can more clearly distinguish whether his or her current position in the battle is safe, improving the game experience of the user.
In some embodiments, upon receiving a second trigger instruction for the interaction component, the thermodynamic diagram indicating the response area may be hidden in the graphical user interface. The second trigger instruction may include a key instruction signal generated by performing a second trigger operation on a target key of a target peripheral, where the target key may be a single key or a combination of several keys on the target peripheral; for example, the key instruction signal may be generated by clicking the "K" key on the keyboard, by simultaneously pressing the "X" and "Y" keys on a gamepad, or by moving the mouse to hover over the interaction component and clicking it.
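Reading the first and second trigger instructions together, the interaction component behaves as a visibility toggle for the thermodynamic diagram. A minimal sketch of this interpretation follows (Python; the class name `HeatmapToggle` is hypothetical and not part of the application):

```python
class HeatmapToggle:
    """Show the thermodynamic diagram on a first trigger instruction and
    hide it again on a second trigger instruction for the same
    interaction component."""

    def __init__(self):
        self.visible = False

    def on_trigger(self):
        # each trigger instruction flips the display state
        self.visible = not self.visible
        return self.visible
```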
It should be noted that the second trigger instruction may be determined by performing a click operation on a target position with a target peripheral, by performing a touch operation on the target position with a touch device or a hand, or by providing an image acquisition module (for example, a camera module) in the terminal device and receiving, through the image acquisition module, a gesture instruction issued by the user.
As can be seen from the foregoing, the display control method, apparatus, electronic device, storage medium, and program product provided in the present application provide a graphical user interface through a terminal device, where at least part of a virtual scene is displayed in the graphical user interface, the virtual scene including a first virtual character controlled by the terminal device, and an interaction component may be provided in the graphical user interface. When a first trigger instruction for the interaction component is received, interaction information corresponding to a second virtual character in the target virtual scene area currently displayed by the graphical user interface can be obtained, a response area corresponding to the second virtual character can be determined according to the interaction information, and the response area can then be displayed. In this way, the response areas of other virtual characters can be determined, attacks by enemy virtual characters can be avoided, whether one may be attacked by the enemy can be determined through the displayed response area corresponding to each virtual character in a multi-player mixed combat scene, the attack scopes of the two opposing sides can be distinguished, and the game experience of the user is thereby improved.
It should be noted that the method of the embodiments of the present application may be performed by a single device, for example, a computer or a server. The method of the embodiments may also be applied to a distributed scenario, being completed by a plurality of devices cooperating with one another. In such a distributed scenario, one of the devices may perform only one or more steps of the methods of the embodiments of the present application, and the devices may interact with each other to complete the method.
It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 5 shows an exemplary structural schematic diagram of a display control apparatus provided in an embodiment of the present application.
Based on the same inventive concept, the application also provides a display control device corresponding to the method of any embodiment.
Referring to fig. 5, the display control apparatus provides a graphical user interface through a terminal device, where at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene includes a first virtual character controlled by the terminal device. The apparatus comprises a first display module, a determining module, and a second display module, wherein:
a first display module configured to provide an interactive component in the graphical user interface;
the determining module is configured to respond to receiving a first trigger instruction for the interaction component, acquire interaction information corresponding to a second virtual role in a target virtual scene area currently displayed by the graphical user interface, and determine a response area corresponding to the second virtual role according to the interaction information;
and a second display module configured to display the response area.
In one possible implementation, the interaction information includes: skill release information corresponding to the second virtual character;
the determination module is configured to:
responding to receiving the first trigger instruction for the interaction component, acquiring skill release information corresponding to the second virtual role in a target virtual scene area currently displayed by the graphical user interface; the first trigger instruction comprises a trigger instruction signal generated by performing a first trigger operation;
determining a skill interaction response area corresponding to the second virtual role according to the skill release information;
and determining a response area corresponding to the second virtual role according to the skill interaction response area.
In one possible implementation, the skill release information includes: skill release ranges and character orientations;
the determination module is further configured to:
and determining a skill interaction response area corresponding to the second virtual role according to the area covered by the skill release range when the second virtual role is in different role orientations.
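Assuming a sector-shaped skill release range (the application does not fix the shape of the skill release range, so this is an illustrative assumption), the area covered over different character orientations is the union of the per-orientation sectors, which may be sketched as follows (Python; `in_sector`, `half_angle`, and `facings` are hypothetical names):

```python
import math

def in_sector(px, py, cx, cy, radius, facing, half_angle):
    """Point-in-sector test: True when (px, py) lies inside the skill
    release range of a character at (cx, cy) facing `facing` radians."""
    dx, dy = px - cx, py - cy
    if math.hypot(dx, dy) > radius:
        return False
    # signed angular difference between point bearing and facing, wrapped to [-pi, pi]
    diff = (math.atan2(dy, dx) - facing + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def in_response_area(px, py, cx, cy, radius, half_angle, facings):
    """The skill interaction response area is the union of the areas
    covered by the skill release range over the orientations considered."""
    return any(in_sector(px, py, cx, cy, radius, f, half_angle) for f in facings)
```

If every orientation is considered, the union degenerates to a full disc of the skill release radius, which matches the later statement that the area is at least a portion of the virtual scene centered on the character.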
In one possible implementation of the present invention,
the determination module is further configured to:
determining whether there is an occlusion element that at least partially overlaps the skill interaction response area;
responsive to the presence of an occlusion element that at least partially overlaps the skill interaction response area, determining an occlusion area between the skill interaction response area and the occlusion element, and determining a response area corresponding to the second virtual character from the skill interaction response area excluding the occlusion area.
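By way of non-limiting illustration, the exclusion of the occlusion area may be sketched on a coarse grid (Python; the ray-march in `visible_cells` is an assumed occlusion model, since the application does not specify how the occlusion area between the skill interaction response area and the occlusion element is computed):

```python
def visible_cells(caster, area_cells, blocked):
    """Keep only the grid cells of the skill interaction response area
    that have an unobstructed straight line back to the caster; cells
    behind an occlusion element are excluded from the displayed area."""
    def line_is_clear(a, b):
        # coarse ray-march from a to b, checking intermediate cells
        steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]))
        for i in range(1, steps):
            t = i / steps
            cell = (round(a[0] + (b[0] - a[0]) * t),
                    round(a[1] + (b[1] - a[1]) * t))
            if cell in blocked:
                return False
        return True

    return {c for c in area_cells if c not in blocked and line_is_clear(caster, c)}
```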
In one possible implementation, the skill interaction response area is at least a portion of the virtual scene centered on the second virtual character.
In one possible implementation, the second display module is further configured to:
determining the camp to which the first virtual role belongs and the camp to which the second virtual role belongs;
and in response to the camp to which the first virtual character belongs being the same as the camp to which the second virtual character belongs, displaying a response area corresponding to the second virtual character in a first color.
In one possible implementation, the second display module is further configured to:
in response to the camp to which the first virtual role belongs being different from the camp to which the second virtual role belongs, displaying a response area corresponding to the second virtual role in a second color; wherein the first color is different from the second color.
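The two-color rule above may be captured in a few lines (Python; the concrete RGB values are placeholders, the application only requiring that the first and second colors differ):

```python
FRIENDLY_COLOR = (0, 128, 255)   # first color: same camp
HOSTILE_COLOR = (255, 64, 0)     # second color: different camp

def area_color(own_camp, other_camp):
    """Choose the display color of a second virtual character's response
    area by comparing its camp with that of the first virtual character."""
    return FRIENDLY_COLOR if other_camp == own_camp else HOSTILE_COLOR
```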
In one possible implementation manner, the apparatus further includes: a generating module;
the generation module is configured to:
generating prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and displaying the prompt information in the graphical user interface corresponding to the first virtual character; the prompt information is used for indicating that the first virtual role is in a response area corresponding to the second virtual role.
In one possible implementation, the prompt information includes: a first prompt message;
the generation module is further configured to:
generating the first prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and the camp of the first virtual character being the same as that of the second virtual character; the first prompt information is used for indicating that the first virtual character is in a response area corresponding to a second virtual character of the same camp.
In one possible implementation manner, the prompt information further includes: a second prompt message;
the generation module is further configured to:
generating the second prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and the camp of the first virtual character being different from that of the second virtual character; the second prompt information is used for indicating that the first virtual character is in a response area corresponding to a second virtual character of a different camp.
In one possible implementation, the second display module is further configured to:
generating a thermodynamic diagram according to the response area, and displaying the thermodynamic diagram for indicating the response area.
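One way to generate such a thermodynamic diagram is to accumulate the response areas into an intensity grid, so that cells covered by several areas receive proportionally more heat, which is what makes overlapping regions render with deeper color. A non-limiting sketch follows (Python; the grid representation and the name `build_heatmap` are assumptions, not the application's implementation):

```python
def build_heatmap(width, height, response_areas):
    """Accumulate response areas (each a set of (x, y) grid cells) into a
    2D intensity grid; overlap regions end up with higher heat values."""
    grid = [[0] * width for _ in range(height)]
    for cells in response_areas:
        for (x, y) in cells:
            if 0 <= x < width and 0 <= y < height:
                grid[y][x] += 1
    return grid
```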
In one possible implementation, the thermodynamic diagram includes: a stereoscopic thermodynamic diagram and a planar thermodynamic diagram; the graphical user interface further comprises: a map control;
the second display module is further configured to:
generating a stereoscopic thermodynamic diagram according to the response area, and displaying the stereoscopic thermodynamic diagram in the graphical user interface;
and determining the projection of the stereoscopic thermodynamic diagram on a target surface in the virtual scene to determine the planar thermodynamic diagram, and displaying the planar thermodynamic diagram at a target position of the map control in a target scale.
In one possible implementation manner, the apparatus further includes: a hiding module;
the concealment module is configured to:
hiding, in response to receiving a second trigger instruction for the interaction component, the thermodynamic diagram indicating the response area in the graphical user interface; the second trigger instruction comprises a trigger instruction signal generated by performing a second trigger operation.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
The device of the foregoing embodiment is configured to implement the corresponding display control method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Fig. 6 shows an exemplary structural schematic diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the display control method of any embodiment when executing the program. Fig. 6 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: processor 610, memory 620, input/output interface 630, communication interface 640, and bus 650. Wherein processor 610, memory 620, input/output interface 630, and communication interface 640 enable communication connections among each other within the device via bus 650.
The processor 610 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 620 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 620 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented in software or firmware, the relevant program codes are stored in the memory 620 and invoked for execution by the processor 610.
The input/output interface 630 is used for connecting with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The communication interface 640 is used to connect a communication module (not shown in the figure) to enable communication interaction between the present device and other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 650 includes a path to transfer information between components of the device (e.g., processor 610, memory 620, input/output interface 630, and communication interface 640).
It should be noted that although the above device only shows the processor 610, the memory 620, the input/output interface 630, the communication interface 640, and the bus 650, in the implementation, the device may further include other components necessary for achieving normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding display control method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same inventive concept, corresponding to any of the above embodiments of the method, the present application further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the display control method according to any of the above embodiments.
The computer readable media of the present embodiments include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the display control method according to any one of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, the present disclosure also provides a computer program product corresponding to the display control method described in any of the above embodiments, which includes computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processor to perform the display control method. Corresponding to the execution subject corresponding to each step in each embodiment of the display control method, the processor executing the corresponding step may belong to the corresponding execution subject.
The computer program product of the above embodiment is configured to enable the computer and/or the processor to perform the display control method according to any one of the above embodiments, and has the beneficial effects of corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the present application, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, the embodiments discussed may be used with other memory architectures (e.g., dynamic RAM (DRAM)).
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements and/or the like which are within the spirit and principles of the embodiments are intended to be included within the scope of the present application.

Claims (17)

1. A display control method, characterized in that a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene comprises a first virtual role controlled by the terminal device; the method comprises the following steps:
providing an interaction component in the graphical user interface;
in response to receiving a first trigger instruction for the interaction component, acquiring interaction information corresponding to a second virtual role in a target virtual scene area currently displayed by the graphical user interface, and determining a response area corresponding to the second virtual role according to the interaction information;
And displaying the response area.
2. The method of claim 1, wherein the interaction information comprises: skill release information corresponding to the second virtual character;
the responding to the receiving of the first trigger instruction for the interaction component, obtaining interaction information corresponding to the second virtual role in the target virtual scene area currently displayed by the graphical user interface, and determining a response area corresponding to the second virtual role according to the interaction information, including:
responding to the first trigger instruction aiming at the interaction component, and acquiring skill release information corresponding to the second virtual role in a target virtual scene area currently displayed by the graphical user interface; the first trigger instruction comprises a trigger instruction signal generated by performing a first trigger operation;
determining a skill interaction response area corresponding to the second virtual role according to the skill release information;
and determining a response area corresponding to the second virtual role according to the skill interaction response area.
3. The method of claim 2, wherein the skill release information comprises: skill release ranges and character orientations;
The determining a skill interaction response area corresponding to the second virtual role according to the skill release information comprises the following steps:
and determining a skill interaction response area corresponding to the second virtual role according to the area covered by the skill release range when the second virtual role is in different role orientations.
4. The method of claim 2, wherein the determining a response zone corresponding to the second virtual character from the skill interaction response zone comprises:
determining whether there is an occlusion element that at least partially overlaps the skill interaction response area;
responsive to the presence of an occlusion element that at least partially overlaps the skill interaction response area, determining an occlusion area between the skill interaction response area and the occlusion element, and determining a response area corresponding to the second virtual character from the skill interaction response area excluding the occlusion area.
5. The method of claim 2, wherein the skill interaction response area is at least a portion of the virtual scene centered on the second virtual character.
6. The method of claim 1, wherein the displaying the response area comprises:
determining the camp to which the first virtual role belongs and the camp to which the second virtual role belongs;
and in response to the camp to which the first virtual character belongs being the same as the camp to which the second virtual character belongs, displaying a response area corresponding to the second virtual character in a first color.
7. The method of claim 6, wherein the determining the camp to which the first virtual character belongs and the camp to which the second virtual character belongs further comprises:
in response to the camp to which the first virtual role belongs being different from the camp to which the second virtual role belongs, displaying a response area corresponding to the second virtual role in a second color; wherein the first color is different from the second color.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
generating prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and displaying the prompt information in a graphical user interface corresponding to the first virtual character; the prompt information is used for indicating that the first virtual role is in a response area corresponding to the second virtual role.
9. The method of claim 8, wherein the hint information comprises: a first prompt message;
wherein the generating prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped comprises:
generating the first prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and the camp of the first virtual character being the same as that of the second virtual character; the first prompt information is used for indicating that the first virtual character is in a response area corresponding to a second virtual character of the same camp.
10. The method of claim 8, wherein the hint information further comprises: a second prompt message;
wherein the generating prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped comprises:
generating the second prompt information in response to the response area corresponding to the first virtual character and the second virtual character being at least partially overlapped, and the camp of the first virtual character being different from that of the second virtual character; the second prompt information is used for indicating that the first virtual character is in a response area corresponding to a second virtual character of a different camp.
11. The method according to claim 1, characterized in that the method further comprises:
generating a thermodynamic diagram according to the response area, and displaying the thermodynamic diagram for indicating the response area.
12. The method of claim 11, wherein the thermodynamic diagram comprises: a stereoscopic thermodynamic diagram and a planar thermodynamic diagram; the graphical user interface further comprises: a map control;
the generating a thermodynamic diagram according to the response area and displaying the thermodynamic diagram for indicating the response area comprises the following steps:
generating a stereoscopic thermodynamic diagram according to the response area, and displaying the stereoscopic thermodynamic diagram in the graphical user interface;
and determining the projection of the stereoscopic thermodynamic diagram on a target surface in the virtual scene to determine the planar thermodynamic diagram, and displaying the planar thermodynamic diagram at a target position of the map control in a target scale.
13. The method of claim 11, wherein after generating a thermodynamic diagram from the response area and displaying the thermodynamic diagram for indicating the response area, further comprising:
hiding, in response to receiving a second trigger instruction for the interaction component, a thermodynamic diagram indicating the response area in the graphical user interface; the second trigger instruction comprises a trigger instruction signal generated by performing a second trigger operation.
14. A display control device, characterized in that a graphical user interface is provided through a terminal device, at least part of a virtual scene is displayed in the graphical user interface, and the virtual scene comprises a first virtual role controlled by the terminal device; the device comprises:
a first display module configured to provide an interactive component in the graphical user interface;
the determining module is configured to respond to receiving a first trigger instruction for the interaction component, acquire interaction information corresponding to a second virtual role in a target virtual scene area currently displayed by the graphical user interface, and determine a response area corresponding to the second virtual role according to the interaction information;
and a second display module configured to display the response area.
15. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 13.
16. A computer readable storage medium storing computer instructions, wherein the computer instructions are used for causing a computer to perform the method of any one of claims 1 to 13.
17. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 13.
CN202310357111.7A (filed 2023-03-30), Display control method, display control device, electronic apparatus, storage medium, and program product, status: Pending, publication CN116474366A (en)

Priority Applications (1): CN202310357111.7A, priority date 2023-03-30, filing date 2023-03-30
Publications (1): CN116474366A, published 2023-07-25
Family ID: 87220541
Country Status (1): CN, CN116474366A (en)

Similar Documents

Publication Publication Date Title
CN109529319B (en) Display method and device of interface control and storage medium
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
CN111035918A (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111013142A (en) Interactive effect display method and device, computer equipment and storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111821691A (en) Interface display method, device, terminal and storage medium
CN110496392B (en) Virtual object control method, device, terminal and storage medium
CN111325822B (en) Method, device and equipment for displaying hot spot diagram and readable storage medium
CN112755527A (en) Virtual character display method, device, equipment and storage medium
CN111672126A (en) Information display method, device, equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN113041620B (en) Method, device, equipment and storage medium for displaying position mark
CN112704876B (en) Method, device and equipment for selecting virtual object interaction mode and storage medium
WO2023103615A1 (en) Virtual object switching method and apparatus, device, medium and program product
CN112691372A (en) Virtual item display method, device, equipment and readable storage medium
CN113577765A (en) User interface display method, device, equipment and storage medium
CN112870699A (en) Information display method, device, equipment and medium in virtual environment
CN110801629B (en) Method, device, terminal and medium for displaying virtual object life value prompt graph
TWI803224B (en) Contact person message display method, device, electronic apparatus, computer readable storage medium, and computer program product
CN113289336A (en) Method, apparatus, device and medium for tagging items in a virtual environment
CN111035929B (en) Elimination information feedback method, device, equipment and medium based on virtual environment
CN112316423A (en) Method, device, equipment and medium for displaying state change of virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination