CN115040868A - Prompt information generation method, area adjustment method and device - Google Patents


Info

Publication number
CN115040868A
CN115040868A (application CN202210686321.6A)
Authority
CN
China
Prior art keywords
virtual
skill release
appearance
difference
virtual appearance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210686321.6A
Other languages
Chinese (zh)
Inventor
辛一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210686321.6A
Publication of CN115040868A
Legal status: Pending

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 — Controlling the output signals based on the game progress
    • A63F13/53 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a prompt information generation method, a region adjustment method and a device. The method can acquire a first virtual appearance and a second virtual appearance of a virtual character, and a skill release difference function associated with a user; determine a first element and a second element; determine an element difference value, wherein the element difference value is the numerical difference between the second element and the first element; predict a skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the difference in precision of the virtual character's skill release when adopting the second virtual appearance relative to when adopting the first virtual appearance; and generate prompt information according to the skill release difference so as to prompt the user. In this way, the user can learn the precision difference of the virtual character's skill release under the second virtual appearance relative to the first virtual appearance, improving the user's understanding of the virtual appearance adopted by the virtual character.

Description

Prompt information generation method, area adjustment method and device
Technical Field
The present application relates to the field of computers, and in particular, to a prompt information generation method, a region adjustment method, and corresponding apparatuses.
Background
A User Interface (UI) is a medium for interaction and information exchange between a system and a User, and it realizes conversion between an internal form of information and a human-acceptable form. For example, the game UI may include skill icons, prop icons, etc., and the user may control the virtual character to perform a particular game behavior through a particular game icon, wherein the virtual character may employ a variety of different game skins.
However, different game skins may affect the user's manipulation of the virtual character, and thus affect the skill release of the virtual character, and therefore, it is necessary to improve the user's understanding of the virtual appearance adopted by the virtual character.
Disclosure of Invention
The embodiment of the application provides a prompt message generation method, a region adjustment method and a device, which can improve the understanding of a user on the virtual appearance adopted by a virtual character.
The embodiment of the application provides a prompt information generation method, which is applied to a game, wherein the game comprises a virtual character, the virtual character is controlled by a user through a terminal device, and the method comprises the following steps:
obtaining a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance;
determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element;
predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the precision difference of skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance;
and generating prompt information according to the skill release difference so as to prompt the user.
The embodiment of the present application further provides a prompt information generation device, applied to a game, wherein the game includes a virtual character controlled by a user through a terminal device, and the device includes:
a first acquisition unit configured to acquire a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance;
an element determination unit for determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
a difference determining unit for determining an element difference, the element difference being a numerical difference between the second element and the first element;
the difference prediction unit is used for predicting skill release difference according to the skill release difference function and the element difference value, and the skill release difference represents the precision difference of the skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance;
and the prompting unit is used for generating prompting information according to the skill release difference so as to prompt the user.
In some embodiments, the visual element comprises a pixel, and determining the first element and the second element comprises:
obtaining color values of all pixels of the first virtual appearance and color values of all pixels of the second virtual appearance;
the first element is determined from all pixels of the first virtual appearance and the second element is determined from all pixels of the second virtual appearance according to the number of pixels of the same color value.
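The pixel-based selection above can be sketched as counting color values and taking the most frequent one as the primary visual element. A minimal illustration — the function name and the toy pixel lists are hypothetical, not from the patent:

```python
from collections import Counter

def dominant_color(pixels):
    """Return the most frequent (R, G, B) color value among all pixels.

    `pixels` is an iterable of (r, g, b) tuples; the color covering the
    largest number of pixels is taken as the primary visual element.
    """
    counts = Counter(pixels)
    color, _ = counts.most_common(1)[0]
    return color

# Toy example: the first appearance is mostly red, the second mostly blue.
first_pixels = [(255, 0, 0)] * 6 + [(0, 0, 255)] * 2
second_pixels = [(0, 0, 255)] * 5 + [(255, 0, 0)] * 3

first_element = dominant_color(first_pixels)    # (255, 0, 0)
second_element = dominant_color(second_pixels)  # (0, 0, 255)

# The element difference is the per-channel numerical difference
# between the second element and the first element.
element_diff = tuple(s - f for f, s in zip(first_element, second_element))
```

In a real game client the pixel lists would come from the skin textures; only the counting step is sketched here.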
In some embodiments, the visual element comprises an arc, and determining the first element and the second element comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
determining the first element from all arcs constituting the first virtual appearance contour and the second element from all arcs constituting the second virtual appearance contour according to the number of arcs having the same radian.
In some embodiments, determining the first element from all arcs making up the first virtual appearance contour and the second element from all arcs making up the second virtual appearance contour according to the number of arcs having the same radian comprises:
determining a first candidate arc from all arcs constituting the first virtual appearance contour and a second candidate arc from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian;
determining a first element from the first candidate arcs according to the radian of each first candidate arc;
determining the second element from the second candidate arcs according to the radians of the second candidate arcs.
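One plausible reading of this two-step arc selection — keep the arcs whose radian value occurs most often (the candidates), then choose among the candidates by radian magnitude (the tie-break is an assumption, not stated in the patent) — can be sketched as:

```python
from collections import Counter

def primary_arc(radians_list):
    """Pick the primary arc from a contour's list of arc radians.

    Step 1: candidates = arcs whose radian occurs the most times.
    Step 2: choose among candidates by radian magnitude (assumed rule).
    """
    counts = Counter(radians_list)
    top = max(counts.values())
    candidates = [r for r, n in counts.items() if n == top]
    return max(candidates)

# Hypothetical contour data: radians of the arcs forming each outline.
first_arcs = [0.5, 0.5, 0.5, 1.2]
second_arcs = [0.8, 0.8, 0.3]

first_element = primary_arc(first_arcs)    # 0.5
second_element = primary_arc(second_arcs)  # 0.8

# Element difference = angular difference between the primary arcs.
angle_diff = second_element - first_element
```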
In some embodiments, obtaining the radians of all arcs making up the first virtual appearance contour and the radians of all arcs making up the second virtual appearance contour comprises:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference value between the first sub-image and the background sub-image, wherein the first sub-image is a sub-image in an image with a first virtual appearance, the background sub-image is a sub-image in the background image, and all color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
determining a second target sub-image from the plurality of second sub-images according to the color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in an image with a second virtual appearance, and all color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the color difference value which is not zero in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
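The splitting and background-comparison steps can be roughed out as follows, under the assumption that images are plain 2D grids of scalar color values and tiles are compared element-wise. Tiles whose difference against the background contains both zero and non-zero values straddle the appearance's contour and are the ones handed to angle detection; the function names and grid representation are illustrative:

```python
def split_tiles(img, tile):
    """Split a 2D grid of color values into tile-sized sub-images."""
    h, w = len(img), len(img[0])
    tiles = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles[(y, x)] = [row[x:x + tile] for row in img[y:y + tile]]
    return tiles

def boundary_tiles(img, background, tile):
    """Return tiles whose per-pixel diff against the background contains
    both zero and non-zero values, i.e. tiles crossed by the contour."""
    img_tiles = split_tiles(img, tile)
    bg_tiles = split_tiles(background, tile)
    targets = []
    for key, sub in img_tiles.items():
        diffs = [a - b
                 for row_a, row_b in zip(sub, bg_tiles[key])
                 for a, b in zip(row_a, row_b)]
        if any(d == 0 for d in diffs) and any(d != 0 for d in diffs):
            targets.append(key)
    return targets

# 4x4 image whose top-left 2x2 block differs from a uniform background.
bg = [[0] * 4 for _ in range(4)]
img = [[9 if (y < 2 and x < 2) else 0 for x in range(4)] for y in range(4)]

# With 2x2 tiles the foreground exactly fills tile (0, 0), so no tile
# mixes zero and non-zero diffs; with one 4x4 tile the mix appears.
no_crossing = boundary_tiles(img, bg, 2)  # []
crossing = boundary_tiles(img, bg, 4)     # [(0, 0)]
```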
In some embodiments, the primary visual element comprises a pixel and the element difference is a color difference of the pixel.
In some embodiments, the primary visual element further comprises an arc, the element difference being an angular difference of the arc.
In some embodiments, the skill release difference function comprises a first function and a second function, and predicting the skill release difference according to the skill release difference function and the element difference value comprises:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
the first skill release difference and the second skill release difference are taken as skill release differences.
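Combining the two functions might look like the following sketch, where the linear forms of the per-user functions are illustrative stand-ins — the patent fixes only that the functions are associated with the user, not their shape:

```python
def predict_skill_release_difference(first_fn, second_fn, color_diff, angle_diff):
    """Apply the user-associated first function to the color difference of
    the primary pixels and the second function to the angle difference of
    the primary arcs; the resulting pair is the skill release difference."""
    return first_fn(color_diff), second_fn(angle_diff)

# Hypothetical per-user fits (slopes chosen arbitrarily for illustration).
first_fn = lambda d: 0.001 * d   # color difference -> precision difference
second_fn = lambda d: 0.05 * d   # radian difference -> precision difference

diff = predict_skill_release_difference(first_fn, second_fn, 120, 0.3)
```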
In some embodiments, prior to obtaining the skill release difference function associated with the user, the method further comprises:
acquiring a historical virtual appearance of the virtual character, wherein the historical virtual appearance is a second virtual appearance adopted by the virtual character in historical time;
determining a history element difference value, wherein the history element difference value is a numerical difference value between a main visual element and a first element in the history virtual appearance;
determining historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to the skill release when the virtual character adopts the first virtual appearance;
and constructing a skill release difference function according to the historical skill release difference and the historical element difference.
In some embodiments, determining the historical skill release difference comprises:
acquiring the total number and the number of hits of the released skills when the virtual character adopts a first virtual appearance, and acquiring the total number and the number of hits of the released skills when the virtual character adopts a historical virtual appearance;
determining the precision of the skill release when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
determining the precision of skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits to the total number of skill releases under the historical virtual appearance; and
determining the historical skill release difference according to the ratio of the precision of skill release under the historical virtual appearance to the precision of skill release under the first virtual appearance.
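These steps amount to hit-rate ratios, followed by some fit of historical element differences to historical release differences when constructing the function. The least-squares slope below is one hypothetical choice of fitting method; the patent does not specify how the function is built from the history:

```python
def release_accuracy(hits, total):
    """Precision of skill release = hits / total releases."""
    return hits / total

def historical_skill_release_difference(hist_hits, hist_total,
                                        base_hits, base_total):
    """Ratio of accuracy under the historical (second) appearance to
    accuracy under the first appearance, per the patent's definition."""
    return (release_accuracy(hist_hits, hist_total) /
            release_accuracy(base_hits, base_total))

def fit_linear(points):
    """Least-squares slope through the origin for (element_diff, release_diff)
    pairs — an assumed, minimal way to construct the difference function."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

# E.g. 45/100 hits with the historical skin vs 60/100 with the default:
diff = historical_skill_release_difference(45, 100, 60, 100)  # ~0.75

# Fit from two hypothetical historical observations:
k = fit_linear([(10, 0.02), (20, 0.04)])  # slope ~0.002
```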
The embodiment of the application provides a region adjustment method, a terminal device provides a graphical user interface, the content of the graphical user interface at least partially comprises a game scene and a first virtual character and a second virtual character in the game scene, the first virtual character is a virtual character controlled by the user through the terminal device, and the method comprises the following steps:
acquiring a skill release deviation function associated with a user, and a first virtual appearance, a second virtual appearance and an element difference value in any prompt information generation method provided by the embodiment of the application;
predicting skill release deviation according to a skill release deviation function and the element difference value, wherein the skill release deviation represents the angle deviation of a skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when the first virtual appearance is adopted;
acquiring a skill releasing direction in a game scene when the first virtual character adopts the second virtual appearance and a target area, wherein the target area is a skill releasing area in the direction when the first virtual character adopts the second virtual appearance;
determining a target angle, wherein the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between a first virtual character and a second virtual character;
and according to the target angle and the skill release deviation, performing region adjustment on the target region to obtain an updated region, and updating the region to enable the first virtual character to release the skill in the direction to hit the second virtual character.
The embodiment of the present application provides a region adjusting apparatus, which provides a graphical user interface through a terminal device, where the content of the graphical user interface at least partially includes a game scene and a first virtual character and a second virtual character therein, and the first virtual character is a virtual character controlled by a user through the terminal device, and the apparatus includes:
the second obtaining unit is used for obtaining a skill release deviation function associated with a user and a first virtual appearance, a second virtual appearance and an element difference value in any prompt information generation method provided by the embodiment of the application;
the deviation determining unit is used for predicting skill release deviation according to the skill release deviation function and the element difference value, and the skill release deviation represents the angle deviation of a skill release area of the first virtual character when the second virtual appearance is adopted relative to the skill release area when the first virtual appearance is adopted;
a third obtaining unit, configured to obtain a direction in which the skill is released in the game scene when the first virtual character adopts the second virtual appearance, and a target area, where the target area is an area in which the skill is released in the direction when the first virtual character adopts the second virtual appearance;
the angle determining unit is used for determining a target angle, the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between a first virtual character and a second virtual character;
and the area adjusting unit is used for performing area adjustment on the target area according to the target angle and the skill release deviation to obtain an updated area, and the updated area enables the first virtual character to release the skill in the direction to hit the second virtual character.
In some embodiments, determining the target angle comprises:
determining a first angle, wherein the first angle is an included angle between the direction and a datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
a target angle is determined based on a difference between the first angle and the second angle.
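Assuming all angles are measured from a shared axis — a convention the patent does not fix — the target angle reduces to a difference of differences:

```python
def target_angle(direction_angle, boundary_angle, reference_angle):
    """Target angle = angle between the region boundary and the reference
    line, computed as (first angle) - (second angle), where the first
    angle is direction-vs-reference and the second is direction-vs-boundary.
    All angles in degrees from a shared axis (assumed convention)."""
    first = direction_angle - reference_angle
    second = direction_angle - boundary_angle
    return first - second  # algebraically boundary_angle - reference_angle

# Hypothetical values: direction 30°, boundary 25°, reference line 10°.
angle = target_angle(30.0, 25.0, 10.0)  # 15.0
```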
In some embodiments, prior to obtaining the skill release bias function associated with the user, the method further comprises:
acquiring the historical virtual appearance and the historical element difference value in the prompt information generation method provided by the embodiment of the application;
determining historical skill release deviation, wherein the historical skill release deviation is the angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
and constructing a skill release deviation function according to the historical skill release deviation and the historical element difference value.
The embodiment of the application also provides an electronic device, which comprises a memory and a processor, wherein the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps of any prompt information generation method and any region adjustment method provided by the embodiments of the application.
The embodiment of the present application further provides a computer-readable storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the steps of any prompt information generation method and any region adjustment method provided in the embodiments of the present application.
The embodiments of the application can acquire a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with a user, the second virtual appearance being used to replace the first virtual appearance; determine a first element and a second element, wherein the first element is the primary visual element in the first virtual appearance, the second element is the primary visual element in the second virtual appearance, and the primary visual element is the visual element with the largest proportion; determine an element difference value, i.e. the numerical difference between the second element and the first element; predict a skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the difference in precision of the virtual character's skill release under the second virtual appearance relative to under the first virtual appearance; and generate prompt information according to the skill release difference so as to prompt the user.
According to the method and the device, the primary visual element in the first virtual appearance, namely the first element, and the primary visual element in the second virtual appearance, namely the second element, can be determined. The numerical difference between the second element and the first element, i.e. the element difference value, represents the difference between the primary visual elements of the two virtual appearances. The skill release difference between the two virtual appearances for this user is then predicted from the element difference value and the user-associated skill release difference function, and prompt information is generated accordingly, so that the user can learn the precision difference of the virtual character's skill release under the second virtual appearance relative to the first virtual appearance, improving the user's understanding of the virtual appearance adopted by the virtual character.
The embodiments of the application can acquire a skill release deviation function associated with the user, and the first virtual appearance, the second virtual appearance and the element difference value from the prompt information generation method provided by the embodiments; predict a skill release deviation according to the skill release deviation function and the element difference value, wherein the skill release deviation represents the angular deviation of the skill release area of the first virtual character under the second virtual appearance relative to under the first virtual appearance; acquire the direction of skill release in the game scene when the first virtual character adopts the second virtual appearance, and a target area, wherein the target area is the skill release area in that direction when the first virtual character adopts the second virtual appearance; determine a target angle, wherein the target angle is the included angle between a boundary of the target area in the game scene and a reference line, and the reference line is the connecting line between the position of the first virtual character and the position of the second virtual character; and perform region adjustment on the target area according to the target angle and the skill release deviation to obtain an updated area, the updated area enabling the first virtual character to hit the second virtual character by releasing the skill in that direction.
In the application, the angular deviation of the skill release area between the two virtual appearances associated with a user, i.e. the skill release deviation, can be predicted from the element difference value and the user-associated skill release deviation function. Meanwhile, the direction in which the first virtual character releases the skill and the target area are obtained, the target area being the skill release area in that direction when the first virtual character adopts the second virtual appearance. The included angle between the boundary of the target area and a reference line is measured, the reference line being the connecting line between the first virtual character and the second virtual character. When the absolute value of the target angle is smaller than the absolute value of the skill release deviation, the skill release deviation is added on the basis of the target area, so that the target area is adjusted to obtain an updated area, enabling the first virtual character adopting the second virtual appearance to hit the second virtual character by releasing the skill in that direction. Therefore, the influence of the virtual appearance on the first virtual character's skill release is reduced, and the user's understanding of the virtual appearance adopted by the virtual character is improved.
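The adjustment rule just described — widen the target area by the skill release deviation when the target angle's magnitude is smaller than the deviation's — can be sketched as follows; the half-angle representation of the region is an assumption for illustration:

```python
def adjust_region(target_angle, deviation, region_half_angle):
    """Widen the skill release region when the predicted angular deviation
    would make the release miss: if the target's angular offset from the
    region boundary is smaller in magnitude than the deviation, extend the
    region's half-angle by the deviation; otherwise leave it unchanged."""
    if abs(target_angle) < abs(deviation):
        return region_half_angle + abs(deviation)
    return region_half_angle

# Deviation of 5° exceeds a 3° target angle, so the region is widened;
# an 8° target angle already covers the deviation, so nothing changes.
widened = adjust_region(3.0, 5.0, 15.0)    # 20.0
unchanged = adjust_region(8.0, 5.0, 15.0)  # 15.0
```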
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1a is a scene schematic diagram of a prompt message generation method provided in an embodiment of the present application;
fig. 1b is a schematic scene diagram of a region adjustment method according to an embodiment of the present application;
fig. 1c is a schematic flowchart of a prompt message generation method provided in the embodiment of the present application;
fig. 2a is a schematic flowchart of a region adjustment method according to an embodiment of the present application;
FIG. 2b is a schematic diagram of a scenario of skill release bias provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a prompt information generating apparatus according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a region adjustment apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a prompt message generation method, a prompt message generation device, electronic equipment and a storage medium.
The prompt information generating device may be specifically integrated in an electronic device, and the electronic device may be a terminal, a server, or other devices. The terminal can be a mobile phone, a tablet Computer, an intelligent bluetooth device, a notebook Computer, or a Personal Computer (PC), and the like; the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the prompt information generating apparatus may also be integrated in a plurality of electronic devices, for example, the prompt information generating apparatus may be integrated in a plurality of servers, and the prompt information generating method of the present application is implemented by the plurality of servers.
In some embodiments, the server may also be implemented in the form of a terminal.
At present, in an electronic game, the same virtual character can adopt various virtual appearances, and different virtual appearances can bring different game experiences to the user; for example, the game experience may refer to a game effect, specifically the precision of skill release. Different people have different visual sensitivities, so the same virtual appearance can bring different game experiences to different users. Therefore, when the user controls the virtual character to adopt a new virtual appearance through the terminal device, the user cannot know the game effect brought by that virtual appearance, which affects the user's control of the virtual character through the terminal device.
Since the user cannot intuitively know the game effect of the virtual appearance at present, the embodiment provides a method for generating the prompt information, where the electronic device is a server, and the server can run an electronic game, where the electronic game includes virtual characters, and the virtual characters are controlled by the user through a terminal device.
Then, referring to fig. 1a, the server may obtain, over the network, a first virtual appearance and a second virtual appearance of the virtual character, the second virtual appearance being used to replace the first virtual appearance, and a skill release difference function associated with the user; determining a first element and a second element, wherein the first element is a main visual element in the first virtual appearance, the second element is a main visual element in the second virtual appearance, and the main visual element is a visual element with the largest proportion; determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element; predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the difference of the accuracy of skill release of the virtual character adopting the second virtual appearance relative to the accuracy of skill release of the virtual character adopting the first virtual appearance; and generating prompt information according to the skill release difference so as to prompt the user.
According to the method, the skill release difference associated with the user can be predicted, and prompt information prompting the user can be generated from it, so that the user can learn the precision difference of skill release under the second virtual appearance relative to the first virtual appearance, improving the user's understanding of the second virtual appearance adopted by the virtual character.
Since it is currently difficult for a user to overcome the influence of a virtual appearance on skill release, this embodiment provides an area adjustment method, where the electronic device is a terminal device on which an electronic game may run; a graphical user interface may be displayed on the screen of the terminal device, the content of the graphical user interface at least partially includes a game scene with a first virtual character and a second virtual character therein, and the first virtual character is a virtual character controlled by the user through the terminal device.
Then, referring to fig. 1b, a skill release deviation function associated with the user is obtained, together with the first virtual appearance, the second virtual appearance, and the element difference from the prompt information generation method provided by this embodiment; a skill release deviation is predicted according to the skill release deviation function and the element difference, where the skill release deviation represents the angle deviation of the skill release area of the first virtual character 01 adopting the second virtual appearance relative to the skill release area when adopting the first virtual appearance; a direction 02 in which the first virtual character 01 releases a skill and a target area 03 are acquired, where the target area 03 is the skill release area in the direction 02 when the first virtual character adopts the second virtual appearance; a target angle 04 is determined, where the target angle 04 is the included angle between the boundary of the target area 03 in the game scene and a reference line 05, and the reference line 05 is the connection line between the first virtual character 01 and the second virtual character 06; according to the target angle 04 and the skill release deviation, the target area 03 is adjusted to obtain an updated area 07, and the updated area 07 enables the first virtual character 01 to hit the second virtual character 06 when releasing the skill in the direction 02.
According to the method, when the absolute value of the target angle is smaller than the absolute value of the skill release deviation, the skill release deviation is added on the basis of the target area to adjust its range and obtain the updated area, so that the first virtual character adopting the second virtual appearance can hit the second virtual character when releasing the skill in that direction. This reduces the influence of the virtual appearance on the skill release of the first virtual character and improves the user's understanding of the virtual appearance adopted by the virtual character.
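The adjustment rule above can be sketched as follows, with angles in degrees. Representing the target area by a single boundary angle is a simplifying assumption for illustration; the function name is hypothetical.

```python
def adjust_boundary(target_angle, skill_release_deviation):
    """Widen the skill release area when the deviation would cause a miss.

    If the target angle's magnitude is smaller than the deviation's, the
    skill release deviation is added on the basis of the target area's
    boundary angle; otherwise the area already covers the second character.
    """
    if abs(target_angle) < abs(skill_release_deviation):
        return target_angle + skill_release_deviation
    return target_angle

print(adjust_boundary(3.0, 5.0))   # boundary widened to 8.0
print(adjust_boundary(10.0, 5.0))  # unchanged: 10.0
```

A full implementation would apply this to both boundaries of the fan-shaped skill release area rather than a single angle; that detail is omitted here.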
In some embodiments, the prompt information generation method and the area adjustment method may be performed in a game running on a terminal device or a server, where the game includes a virtual character controlled by the user through the terminal device. When the methods run on the server, they can be implemented and executed based on a cloud interactive system, which comprises the server and a client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the main body that runs the game program is separated from the main body that presents the game picture: the storage and execution of the character control method are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device close to the user side with a data transmission function, such as a terminal, a television, a computer, or a palmtop computer; however, the device performing the character control is the cloud game server in the cloud. When playing, the user operates the client device to send operation instructions, such as touch operation instructions, to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an optional implementation manner, taking a local game as an example, the local terminal device stores the game program and presents the game picture. The local terminal device interacts with the user through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the user in a variety of ways; for example, the interface may be rendered on a display screen of the terminal or provided to the user by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
A game scene (or referred to as a virtual scene) is a virtual scene that an application program displays (or provides) when running on a terminal or a server. Optionally, the virtual scene is a simulated environment of the real world, or a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene and a three-dimensional virtual scene, and the virtual environment can be sky, land, sea and the like, wherein the land comprises environmental elements such as deserts, cities and the like. For example, in a sandbox type 3D shooting game, the virtual scene is a 3D game world for the user to control the virtual character to play against, and an exemplary virtual scene may include: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
The game interface is an interface corresponding to the application program provided or displayed through the graphical user interface; the interface comprises the graphical user interface with which the user interacts and a game picture, where the game picture is a picture of the game scene.
In alternative embodiments, game controls (e.g., skill controls, behavior controls, functionality controls, etc.), indicators (e.g., directional indicators, character indicators, etc.), information presentation areas (e.g., number of clicks, game play time, etc.), or game settings controls (e.g., system settings, stores, coins, etc.) may be included in the UI interface.
For example, in some embodiments, the prompt information may be presented in the graphical user interface through any suitable control.
In an optional embodiment, the game screen is the display picture corresponding to the game scene displayed by the terminal device, and the game screen may include virtual objects that execute game logic in the game scene, such as game objects, NPC characters, and AI characters.
For example, in some embodiments, the content displayed in the graphical user interface at least partially comprises a game scene, wherein the game scene comprises at least one game object.
In some embodiments, the game object in the game scene comprises a virtual object manipulated by the user, i.e., the first virtual object.
The game object refers to a virtual object in the virtual scene, including a game character, where the game character is a dynamic object that can be controlled, i.e., a dynamic virtual object. Alternatively, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object is a character controlled by the user through an input device, an Artificial Intelligence (AI) character set up through training to battle in the virtual environment, or a Non-Player Character (NPC) set to battle in the virtual scene.
Optionally, the virtual object is a virtual character playing a game in a virtual scene. Optionally, the number of virtual objects in the virtual scene match is preset, or dynamically determined according to the number of clients participating in the match, which is not limited in the embodiment of the present application.
In one possible implementation, the user can control the virtual object to perform game behaviors in the virtual scene, where the game behaviors can include moving, releasing skills, using props, dialogue, and the like. For example, the user can control the virtual object to run, jump, crawl, and so on, and can also control the virtual object to fight other virtual objects using the skills and virtual props provided by the application program.
The virtual camera is a necessary component for game scene pictures and is used for presenting them. One game scene corresponds to at least one virtual camera; according to actual needs, two or more virtual cameras can serve as game rendering windows for capturing and presenting the picture content of the game world for the user. The view angle from which the user watches the game world, such as a first-person view angle or a third-person view angle, can be adjusted by setting parameters of the virtual camera. In an optional implementation manner, an embodiment of the present invention provides an area adjustment method in which a graphical user interface is provided through a touch terminal, where the touch terminal may be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
Detailed descriptions are provided below. The numbering of the following embodiments is not intended to limit their order of preference.
This embodiment provides a method for generating prompt information; as shown in fig. 1c, a specific flow of the prompt information generation method may be as follows:
110. A first virtual appearance and a second virtual appearance of the virtual character are obtained, as well as a skill release difference function associated with the user, where the second virtual appearance is used to replace the first virtual appearance.
The first virtual appearance may be a game skin used by the user to control the virtual character, for example, the first virtual appearance may be an initial game skin of the virtual character, a game skin frequently used by the user to control the virtual character, a game skin used by the user to control the virtual character last time, and the like.
The second virtual appearance may be a game skin that the virtual character has not used, for example, the second virtual appearance may be a game skin that the user controls the virtual character to newly use, a game skin that the user will buy, and so on.
The skill release difference function is used for measuring the difference of the accuracy of the skill release when the user controls the virtual character to adopt the second virtual appearance relative to the first virtual appearance.
For example, when the user controls the virtual character to adopt a new game skin, i.e., to adopt the second virtual appearance, it is necessary to acquire the first virtual appearance and the second virtual appearance of the virtual character, and the skill release difference function associated with the user.
For example, when a user wants to buy a game skin, i.e., buy a second virtual appearance, it is necessary to obtain the first virtual appearance and the second virtual appearance of the virtual character, and a skill release difference function associated with the user.
120. A first element and a second element are determined, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance.
Wherein the first element is a primary visual element of all visual elements constituting the first virtual appearance. For example, the first element may be a primary pixel of all pixels constituting the first virtual appearance, a primary arc of all arcs constituting the first virtual appearance, and so on.
Wherein the second element is a primary visual element of all visual elements constituting the second virtual appearance. For example, the second element may be a primary pixel of all pixels constituting the second virtual appearance, a primary arc of all arcs constituting the second virtual appearance, and so on.
In some embodiments, considering that the visual elements comprise pixels, a primary pixel needs to be determined from the pixels constituting the first virtual appearance and the second virtual appearance respectively. Determining the first element and the second element then comprises:
obtaining color values of all pixels of the first virtual appearance and color values of all pixels of the second virtual appearance;
the first element is determined from all pixels of the first virtual appearance and the second element is determined from all pixels of the second virtual appearance according to the number of pixels of the same color value.
And displaying the first virtual appearance on the terminal equipment through the color values of all pixels in the first virtual appearance. For example, the color value may be an RGB value, an HSV value, and so on.
And displaying the second virtual appearance on the terminal equipment through the color values of all pixels in the second virtual appearance. For example, the color value may be an RGB value, an HSV value, and so on.
The first element is a pixel corresponding to a dominant color value of the first virtual appearance, for example, the first element may be a pixel corresponding to a color value with the largest proportion in the first virtual appearance, or may be any one of a plurality of pixels corresponding to color values with larger proportions, and so on.
The second element is a pixel corresponding to a dominant color value of the second virtual appearance; for example, the second element may be the pixel corresponding to the color value with the largest proportion in the second virtual appearance, or any one of a plurality of pixels corresponding to color values with larger proportions, and so on.

In some embodiments, determining the first element from all pixels of the first virtual appearance and determining the second element from all pixels of the second virtual appearance according to the number of pixels of the same color value comprises:
determining the color value with the largest occurrence number in the first virtual appearance, and taking the pixel corresponding to the color value with the largest occurrence number as a first element;
the color value with the largest number of occurrences is determined in the second virtual appearance, and the pixel corresponding to the color value with the largest number of occurrences is taken as the second element.
For example, suppose the color values of all the pixels in the first virtual appearance include color value A, color value B, color value C, color value D, and so on, where the number of pixels with color value A is 100, the number with color value B is 80, and the number with color value C is 60. Since the number of pixels with color value A is the largest, the pixel corresponding to color value A is taken as the first element.
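The dominant-color selection in the example above amounts to a frequency count. A minimal sketch, using Python's standard `collections.Counter`; the function name is a hypothetical illustration.

```python
from collections import Counter

def primary_element(color_values):
    """Return the color value with the largest number of occurrences."""
    return Counter(color_values).most_common(1)[0][0]

# Color value A appears 100 times, B 80 times, C 60 times, as in the example.
pixels = ["A"] * 100 + ["B"] * 80 + ["C"] * 60
print(primary_element(pixels))  # A
```

In practice the color values would be RGB or HSV tuples read from the appearance image rather than letters, but the counting logic is the same.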
In some embodiments, considering that the visual elements comprise arcs, the primary visual element needs to be determined from all arcs constituting the contours of the first virtual appearance and the second virtual appearance. Determining the first element and the second element then comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
the first element is determined from all arcs constituting the first virtual appearance contour and the second element is determined from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian.
Wherein the first virtual appearance contour is composed of all arcs in the contour of the first virtual appearance, and the second virtual appearance contour is composed of all arcs in the contour of the second virtual appearance. For example, the radian of an arc may be any value from 0° to 360°.
Wherein the first element may be the arc corresponding to the primary radian of the first virtual appearance contour. For example, the first element may be the arc corresponding to the maximum radian, or any one of a plurality of arcs corresponding to larger radians.

Wherein the second element may be the arc corresponding to the primary radian of the second virtual appearance contour. For example, the second element may be the arc corresponding to the maximum radian, or any one of a plurality of arcs corresponding to larger radians.
In some embodiments, in order to determine the primary visual element from all arcs in the contours of the first virtual appearance and the second virtual appearance, determining the first element from all arcs constituting the first virtual appearance contour and the second element from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian comprises:
determining a first candidate arc from all arcs constituting the first virtual appearance contour and a second candidate arc from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian;
determining a first element from the first candidate arcs according to the radian of each first candidate arc;
determining the second element from the second candidate arcs according to the radians of all the second candidate arcs.

The first candidate arcs are the arcs of the first virtual appearance whose radians occur at least a preset number of times. For example, if the preset number is 10, the number of arcs with a radian of 44° is 11, the number with 45° is 11, and the number with 46° also satisfies the preset number, then the arcs corresponding to 44°, 45°, and 46° are taken as the first candidate arcs.
The second candidate arcs are the arcs of the second virtual appearance whose radians occur at least the preset number of times. For example, if the preset number is 10, the number of arcs with a radian of 30° is 11, the number with 32° is 14, and the number with 34° is 12, then the arcs corresponding to 30°, 32°, and 34° are taken as the second candidate arcs.
For example, the number of arcs of each radian is determined from all arcs in the first virtual appearance contour, and when the number of arcs of the same radian satisfies the preset number, the arcs corresponding to that radian are taken as first candidate arcs. For example, if the radians of the first candidate arcs include 44°, 45°, and 46°, their average value 45° may be taken and the arc corresponding to 45° used as the first element; alternatively, an arc corresponding to one radian may be randomly selected from 44°, 45°, and 46° as the first element, or the arc corresponding to the median (45°) of 44°, 45°, and 46° may be used, and so on.
For example, the number of arcs of each radian is determined from all arcs in the second virtual appearance contour, and when the number of arcs of the same radian satisfies the preset number, the arcs corresponding to that radian are taken as second candidate arcs. For example, if the radians of the second candidate arcs include 30°, 32°, and 34°, their average value 32° may be taken and the arc corresponding to 32° used as the second element; alternatively, an arc corresponding to one radian may be randomly selected from 30°, 32°, and 34° as the second element, or the arc corresponding to the median (32°) of 30°, 32°, and 34° may be used, and so on.
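The candidate-arc selection described above can be sketched as follows. This is an illustrative simplification (radians as integer degrees, standard-library counting); all names are hypothetical.

```python
from collections import Counter
from statistics import mean, median
import random

def candidate_radians(radians, preset_number=10):
    """Return, sorted, the radians whose occurrence count satisfies the preset number."""
    return sorted(r for r, n in Counter(radians).items() if n >= preset_number)

def pick_primary_arc(candidates, strategy="mean"):
    """Select the primary radian: average, median, or a random candidate."""
    if strategy == "mean":
        return mean(candidates)
    if strategy == "median":
        return median(candidates)
    return random.choice(candidates)

# 44, 45, and 46 degrees each occur at least 10 times; 90 degrees does not.
arcs = [44] * 11 + [45] * 11 + [46] * 12 + [90] * 3
cands = candidate_radians(arcs)   # [44, 45, 46]
print(pick_primary_arc(cands))    # 45
```

With `strategy="mean"` the result matches the averaging example in the text; the median and random strategies correspond to the other options mentioned.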
In some embodiments, considering that the visual elements comprise arcs, a primary arc needs to be determined from all arcs constituting the first virtual appearance contour and the second virtual appearance contour. Determining the first element and the second element then comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
determining the radian with the largest occurrence number in all radians of arcs of the first virtual appearance outline, and taking the arc corresponding to the radian with the largest occurrence number as a first element;
and determining the radian with the largest number of occurrences among the radians of all arcs of the second virtual appearance contour, and taking the arc corresponding to that radian as the second element.
For example, if the radian with the largest number of occurrences among all arcs constituting the first virtual appearance contour is 45°, the arc corresponding to 45° is the first element.

For example, if the radian with the largest number of occurrences among all arcs constituting the second virtual appearance contour is 32°, the arc corresponding to 32° is the second element.
In some embodiments, considering that the visual element may be an arc in a contour constituting a virtual appearance, obtaining the radians of all arcs in the contour of the first virtual appearance and the radians of all arcs in the contour of the second virtual appearance comprises:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference value between the first sub-image and the background sub-image, wherein the first sub-image is a sub-image in an image with a first virtual appearance, the background sub-image is a sub-image in the background image, and all color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
determining a second target sub-image from the plurality of second sub-images according to the color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in an image with a second virtual appearance, and all color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the non-zero color difference values in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
The image of the first virtual appearance comprises the first virtual appearance, and the image of the first virtual appearance can be a front screenshot of the first virtual appearance, a back screenshot of the first virtual appearance, a side screenshot of the first virtual appearance, and the like.
The image of the second virtual appearance may be a front screenshot of the second virtual appearance, a back screenshot of the second virtual appearance, a side screenshot of the second virtual appearance, and the like.
Wherein the color in the background image may be the same as the background color in the image of the first virtual appearance and the image of the second virtual appearance. For example, if the background color in those images is pure white, the color in the background image is also pure white; if the background color is a gradient, the color in the background image is the same gradient; and so on.
The preset splitting rule is a rule for splitting the image into a plurality of sub-images, for example, the preset splitting rule may split the image into a plurality of sub-images of the same size, or may split the image into sub-images of different sizes, where the sub-images of the first virtual-appearance image, the sub-images of the second virtual-appearance image, and the sub-images of the background image have the same and corresponding sizes.
The background sub-image is a sub-image obtained by splitting the background image. For example, if the background image is a solid image, the color of each background sub-image is the same. For example, if the background image is a gradient color, after the background sub-images are merged and restored, the color in the background image may correspond to the background color in the first virtual-appearance image and the background color in the second virtual-appearance image.
The first sub-image is obtained by splitting the image with the first virtual appearance.
The second sub-image is a sub-image obtained by splitting the image with the second virtual appearance.
The color difference between the first sub-image and the background sub-image is the color value of each pixel in the first sub-image minus the color value of the corresponding pixel in the background sub-image. For example, if the first sub-image comprises an arc in the contour of the first virtual appearance, the color difference values between the first sub-image and the background sub-image comprise both values equal to zero and values different from zero. If the first sub-image lies inside the first virtual appearance and does not comprise an arc in the contour, the color difference values comprise only values not equal to zero.
The color difference values corresponding to the first target sub-image include a color difference value equal to zero and a color difference value different from zero.
The color difference between the second sub-image and the background sub-image is the color value of each pixel in the second sub-image minus the color value of the corresponding pixel in the background sub-image. For example, if the second sub-image comprises an arc in the contour of the second virtual appearance, the color difference values between the second sub-image and the background sub-image comprise both values equal to zero and values different from zero. If the second sub-image lies inside the second virtual appearance and does not comprise an arc in the contour, the color difference values comprise only values not equal to zero.
The color difference values corresponding to the second target sub-images include a color difference value equal to zero and a color difference value different from zero.
For example, when the background image is a pure color image and the splitting rule splits the images into a plurality of sub-images of the same size, the first sub-image, the second sub-image, and the background sub-image have the same size. The color value of each pixel in the background sub-image is subtracted from the color value of the corresponding pixel in the first sub-image; if the color differences include both zero and non-zero values, the first sub-image is a first target sub-image containing an arc, and angle detection is performed on the region corresponding to the non-zero values in the first target sub-image to obtain the radians of the arcs in the contour of the first virtual appearance. Likewise, the color value of each pixel in the background sub-image is subtracted from the color value of the corresponding pixel in the second sub-image; if the color differences include both zero and non-zero values, the second sub-image is a second target sub-image containing an arc, and angle detection is performed on the region corresponding to the non-zero values in the second target sub-image to obtain the radians of the arcs in the contour of the second virtual appearance.
For example, when the background image is a gradient image and the splitting rule splits the images into a plurality of sub-images of the same size, the first sub-image, the second sub-image, and the background sub-image have the same size. The color value of each pixel in the corresponding background sub-image is subtracted from the color value of the pixel in the first sub-image, and likewise for the second sub-image, where the first sub-image, the second sub-image, and the corresponding background sub-image occupy the same positions in their respective images.
For example, when the background image is a pure color image or a gradient image and the splitting rule splits the images into a plurality of sub-images of different sizes, the color value of each pixel in the corresponding background sub-image is subtracted from the color value of the pixel in the first sub-image, and likewise for the second sub-image, where the first sub-image and the second sub-image occupy the same positions in their respective images as the corresponding background sub-image.
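The sub-image splitting and target selection can be sketched as follows for a pure-color background. This is a simplified illustration assuming grayscale images and NumPy arrays; the function name and tile size are hypothetical, and the subsequent angle detection on the non-zero regions (for example with a Hough transform) is omitted.

```python
import numpy as np

def target_tiles(appearance, background, tile=8):
    """Split both grayscale images into tile x tile sub-images and return the
    top-left indices of target sub-images: those whose per-pixel difference
    from the background contains both zero and non-zero values, i.e. the
    sub-images crossed by the appearance contour."""
    h, w = appearance.shape
    targets = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            diff = (appearance[y:y + tile, x:x + tile].astype(int)
                    - background[y:y + tile, x:x + tile].astype(int))
            if (diff == 0).any() and (diff != 0).any():
                targets.append((y, x))
    return targets

bg = np.zeros((16, 16), dtype=np.uint8)  # pure-color background image
fg = bg.copy()
fg[:4, :4] = 200                         # appearance occupies part of one tile
print(target_tiles(fg, bg))              # [(0, 0)]
```

Sub-images that differ from the background everywhere (inside the appearance) or nowhere (pure background) are skipped, matching the zero/non-zero criterion in the text.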
130. Determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element.
The element difference is the numerical difference between the primary visual element in the second virtual appearance and the primary visual element in the first virtual appearance. For example, when the visual element is a pixel, the element difference is the color difference between the primary pixel in the second virtual appearance and the primary pixel in the first virtual appearance; when the visual element is an arc, the element difference is the angle difference between the primary arc in the second virtual appearance and the primary arc in the first virtual appearance; and so on.
In some embodiments, the primary visual element comprises a pixel and the element difference is a color difference of the pixel.
In some embodiments, the primary visual element further comprises an arc, the element difference being an angular difference of the arc.
In some embodiments, in order to measure the relationship between the accuracy of the skill release associated with the user and the visual elements, before obtaining the skill release difference function associated with the user, the method further includes:
obtaining a historical virtual appearance of the virtual character, wherein the historical virtual appearance is a second virtual appearance adopted by the virtual character in historical time;
determining a history element difference value, wherein the history element difference value is a numerical difference value between a main visual element and a first element in the history virtual appearance;
determining historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to the skill release when the virtual character adopts the first virtual appearance;
and constructing a skill release difference function according to the historical skill release difference and the historical element difference.
The historical virtual appearance may be a game skin used by the user to control the virtual character at the historical time; the historical virtual appearance and the first virtual appearance are two different virtual appearances.
Wherein the historical time is a time prior to obtaining the second virtual appearance of the virtual character. For example, the historical time may be the time before the user controlled the avatar to assume the second virtual appearance, the time before the user purchased the second virtual appearance, and so on.
Wherein the primary visual element in the historical virtual appearance is a primary visual element of all visual elements that make up the historical virtual appearance. For example, the primary visual element in the historical virtual appearance may be a primary pixel in all pixels that make up the historical virtual appearance, a primary arc in all arcs that make up the historical virtual appearance, and so on.
Wherein the historical element difference is a numerical difference between a primary visual element in the historical virtual appearance and a primary visual element in the first virtual appearance. For example, the historical element difference may be a pixel difference between a primary pixel in the historical virtual appearance and a primary pixel in the first virtual appearance, an arc difference between a primary arc in the historical virtual appearance and a primary arc in the first virtual appearance, and so on.
The historical skill release difference is the precision difference between the skill release when the virtual character adopts the historical virtual appearance and the skill release when the virtual character adopts the first virtual appearance. For example, if the precision of skill release when the virtual character adopts the historical virtual appearance is 0.6 and the precision of skill release when it adopts the first virtual appearance is 0.5, the historical skill release difference is 0.1.
Wherein the skill release difference function may characterize a relationship between a historical pixel difference value and a historical skill release difference, wherein the historical pixel difference value is a color value difference between a primary pixel in the historical virtual appearance and a primary pixel in the first virtual appearance.
The skill release difference function may also characterize a relationship between a historical radian difference value and a historical skill release difference, wherein the historical radian difference value is a radian difference between a primary arc in the historical virtual appearance and a primary arc in the first virtual appearance.
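The embodiment does not fix the form of the skill release difference function, only that it is constructed from historical (element difference, skill release difference) pairs. As one hedged possibility, the sketch below assumes a simple least-squares linear fit y = k*x + b; all numbers are hypothetical.

```python
# Hedged sketch: constructing a skill release difference function as a
# least-squares linear fit over historical data pairs. The linear form
# is an assumption; the patent text does not specify one.

def fit_linear(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    k = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - k * mean_x
    return lambda x: k * x + b

# Hypothetical historical pixel color differences vs. observed
# skill release (precision) differences.
skill_release_difference_fn = fit_linear([10, 20, 30], [0.02, 0.04, 0.06])
predicted = skill_release_difference_fn(25)  # about 0.05
```

A separate fit over radian differences would give the second function in the same way.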
In some embodiments, when constructing the skill release difference function associated with the user, the historical skill release difference is determined in view of the difference between the precision of skill release when the virtual character adopts the historical virtual appearance and the precision of skill release when the virtual character adopts the first virtual appearance. The method includes:
acquiring the total number and the number of hits of the released skills when the virtual character adopts a first virtual appearance, and acquiring the total number and the number of hits of the released skills when the virtual character adopts a historical virtual appearance;
determining the precision of the skill release when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
and determining the precision of the skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the historical virtual appearance to the total number.
Determining a historical skill release difference according to the difference between the precision of skill release when the virtual character adopts the historical virtual appearance and the precision of skill release when the virtual character adopts the first virtual appearance.
For example, the total number of skill releases when the virtual character adopts the first virtual appearance is the total number of skill releases when the user controls the virtual character to adopt the first virtual appearance to attack the enemy virtual character.
The number of hits released by the virtual character when adopting the first virtual appearance is the number of hits released by the user to control the enemy virtual character when adopting the first virtual appearance.
The total number of the released skills when the virtual character adopts the historical virtual appearance is the total number of the released skills when the user controls the virtual character to adopt the historical virtual appearance and attacks the virtual character of the enemy.
The number of hits released by the virtual character when adopting the historical virtual appearance is the number of hits released by the user to control the enemy virtual character when adopting the historical virtual appearance.
The accuracy of the release of the skill when the virtual character adopts the first virtual appearance is a value obtained by dividing the number of hits of the release of the skill when the virtual character adopts the first virtual appearance by the total number of the releases of the skill when the virtual character adopts the first virtual appearance. For example, if the number of hits is 10 and the total number is 50, the accuracy is 0.2.
The accuracy of the release of the skills of the virtual character adopting the historical virtual appearance is a value obtained by dividing the number of hits of the release of the skills of the virtual character adopting the historical virtual appearance by the total number of the releases of the skills of the virtual character adopting the historical virtual appearance.
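The precision computation and the historical skill release difference can be sketched as follows. The hit and total counts are hypothetical, and the difference is taken as a subtraction of the two precisions, matching the worked example above (0.6 versus 0.5 giving 0.1).

```python
def skill_precision(hits, total):
    # Precision of skill release: number of hits divided by the total
    # number of skill releases.
    return hits / total

# 10 hits out of 50 releases gives a precision of 0.2, as in the example.
precision_first = skill_precision(10, 50)    # first virtual appearance
precision_hist = skill_precision(30, 100)    # historical virtual appearance

# Historical skill release difference as the subtraction of precisions.
historical_difference = precision_hist - precision_first  # 0.1
```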
140. And predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the difference of the accuracy of skill release of the virtual character adopting the second virtual appearance relative to the accuracy of skill release of the virtual character adopting the first virtual appearance.
Wherein, the skill release difference is a value obtained by substituting the element difference value into the skill release difference function.
For example, the skill release difference function is used to represent a functional relationship between a color difference value of a pixel and a skill release difference, and if the element difference value is the color difference value of the pixel, the color difference value of the pixel may be substituted into the skill release difference function, and the skill release difference corresponding to the color difference value of the pixel may be found from the skill release difference function.
For example, the skill release difference function is used to represent a functional relationship between an angle difference of an arc and a skill release difference, and if the element difference is the angle difference of the arc, the angle difference of the arc may be substituted into the skill release difference function, so that the skill release difference corresponding to the angle difference of the arc may be found from the skill release difference function.
In some embodiments, the visual elements may include both pixels and arcs, so the element difference values may be a color difference value of the pixels and an angle difference value of the arcs, and the skill release difference is predicted in the two corresponding ways. The skill release difference function accordingly includes a first function and a second function, and predicting the skill release difference according to the skill release difference function and the element difference values includes:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
the first skill release difference and the second skill release difference are taken as skill release differences.
Wherein the first function is used for characterizing the functional relationship between the color difference value of the pixel and the skill release difference.
Wherein the first skill release difference is a skill release difference associated with a color difference value of the pixel.
Wherein the second function is used for representing the functional relation between the angle difference of the arc and the skill release difference.
Wherein the second skill release difference is a skill release difference associated with the angular difference of the arc.
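The two-way prediction above can be sketched as follows, with hypothetical linear first and second functions standing in for the fitted ones; the coefficients and input differences are assumptions for illustration.

```python
# Hypothetical first and second functions (coefficients are assumptions).
first_fn = lambda color_diff: 0.002 * color_diff   # pixel color difference
second_fn = lambda angle_diff: 0.01 * angle_diff   # arc angle difference

def predict_skill_release_difference(pixel_color_diff, arc_angle_diff):
    # Predict the first skill release difference from the pixel color
    # difference and the second from the arc angle difference, and return
    # both together as the skill release difference.
    first_difference = first_fn(pixel_color_diff)
    second_difference = second_fn(arc_angle_diff)
    return first_difference, second_difference

skill_release_difference = predict_skill_release_difference(50, 15)
```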
150. And generating prompt information according to the skill release difference so as to prompt the user.
The prompt information is prompt information of the skill release difference associated with the user. For example, the prompt information may be displayed on a purchase interface of the second virtual appearance, displayed in the game interface when the virtual character newly adopts the second virtual appearance, displayed when the virtual character releases a skill while adopting the second virtual appearance, and so on.
It is understood that specific implementations of the present application involve data related to the first virtual appearance, the historical virtual appearance, and the like adopted by the virtual character under the user's control. When the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use, and processing of the related data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
As can be seen from the above, the embodiment of the present application may be applied to a game, where the game includes virtual characters, and the virtual characters are controlled by a user through a terminal device, and the method includes: obtaining a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance; determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance; determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element; predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the precision difference of skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance; and generating prompt information according to the skill release difference so as to prompt the user.
According to the scheme, the primary visual element in the first virtual appearance, namely the first element, and the primary visual element in the second virtual appearance, namely the second element, can be determined. The element difference value is determined from the numerical difference between the second element and the first element, and reflects the difference between the primary visual elements of the two virtual appearances. The skill release difference between the two virtual appearances associated with the user is then predicted from the element difference value and the skill release difference function associated with the user, and prompt information is generated according to the skill release difference, so that the user can learn the precision difference of skill release when the virtual character adopts the second virtual appearance relative to the first virtual appearance, improving the user's understanding of the virtual appearance adopted by the virtual character.
In this embodiment, a region adjustment method is provided, where a terminal device provides a graphical user interface, where content of the graphical user interface at least partially includes a game scene and a first virtual character and a second virtual character therein, where the first virtual character is a virtual character controlled by a user through the terminal device, and as shown in fig. 2a, a specific flow of the region adjustment method may be as follows:
210. Obtaining the skill release deviation function associated with the user, and the first virtual appearance, the second virtual appearance, and the element difference value in any prompt information generation method provided by the embodiments of the present application.
The skill release deviation function is used for measuring the angle deviation of the skill release area when the user controls the first virtual character to adopt the second virtual appearance relative to the skill release area when the user adopts the first virtual appearance.
The skill release area is an area where the skill release of the first virtual character in the virtual appearance hits the second virtual character. For example, the second virtual character is directly in front of the first virtual character, the first virtual character releases the skill to the second virtual character when adopting the first virtual appearance, and the skill released by the first virtual character can hit the second virtual character as long as the direction of releasing the skill is within the skill release area.
In some embodiments, the skill release area is one of fan-shaped, circular, and rectangular in shape.
The relationship between the second virtual character and the first virtual character is an adversary relationship, for example, the second virtual character may be an enemy virtual character, or an NPC in a game that may harm a player, and so on.
In some embodiments, users have different sensitivities to different visual elements; that is, a user achieves different game effects when controlling the first virtual character with different virtual appearances, and these effects may specifically affect the skill release area of the first virtual character. In order to measure the relationship between the skill release area associated with the user and the visual elements, before obtaining the skill release deviation function associated with the user, the method further includes:
acquiring a difference value between a historical virtual appearance and a historical element in the prompt information generation method provided by the embodiment of the application;
determining historical skill release deviation, wherein the historical skill release deviation is the angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
for example, the second virtual character is directly in front of the first virtual character, and 0 ° is directly in front, the skill release region where the first virtual character hits the second virtual character with the historical virtual appearance is in the range of (-5 ° +5 °), and the skill release region where the first virtual character hits the second virtual character with the first virtual appearance is in the range of (-3 ° +3 °), and the historical skill release deviation is-2 ° and +2 °.
And constructing a skill release deviation function according to the historical skill release deviation and the historical element difference value.
For example, the skill release deviation function may characterize a relationship between a historical pixel difference value and a historical skill release deviation, where the historical pixel difference value is the color value difference between a primary pixel in the historical virtual appearance and a primary pixel in the first virtual appearance.
The skill release deviation function may also characterize a relationship between a historical radian difference value and a historical skill release deviation, where the historical radian difference value is the radian difference between a primary arc in the historical virtual appearance and a primary arc in the first virtual appearance.
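The worked example above (historical area (-5°, +5°) versus first-appearance area (-3°, +3°)) can be sketched as a per-boundary subtraction; the tuple representation of an area is an assumption for illustration.

```python
def historical_skill_release_deviation(hist_area, first_area):
    # Areas are (lower, upper) angle bounds in degrees relative to the
    # reference direction; the deviation is the offset of each boundary
    # of the historical area relative to the first-appearance area.
    return (hist_area[0] - first_area[0], hist_area[1] - first_area[1])

deviation = historical_skill_release_deviation((-5, 5), (-3, 3))  # (-2, 2)
```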
220. And predicting skill release deviation according to the skill release deviation function and the element difference value, wherein the skill release deviation represents the angle deviation of the skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when the first virtual appearance is adopted.
The skill release deviation is a value obtained by substituting the element difference value into a skill release deviation function.
For example, the skill release deviation function is used to represent a functional relationship between a color difference value of a pixel and a skill release deviation, where an element difference value is a color difference value of a pixel, the color difference value of the pixel may be substituted into the skill release deviation function, and a skill release deviation corresponding to the color difference value of the pixel may be found from the skill release deviation function.
For example, the skill release deviation function is used for representing a functional relationship between the angle difference of the arc and the skill release deviation, and the element difference is the angle difference of the arc, so that the angle difference of the arc can be substituted into the skill release deviation function, and the skill release deviation corresponding to the angle difference of the arc can be found from the skill release deviation function.
For example, as shown in fig. 2B, the skill release deviation characterizes an angular deviation C of the skill release area a when the first virtual character adopts the second virtual appearance relative to the skill release area B when the first virtual appearance is adopted.
230. And obtaining the skill releasing direction in the game scene when the first virtual character adopts the second virtual appearance and a target area, wherein the target area is the skill releasing area in the direction when the first virtual character adopts the second virtual appearance.
For example, when a first virtual character releases skills to a second virtual character, a direction in which the first virtual character releases skills may be acquired.
The target area is a skill releasing area in the direction when the first virtual character adopts the second virtual appearance.
240. And determining a target angle, wherein the target angle is an included angle between the boundary of a target area in the game scene and a reference line, and the reference line is a connection line between the first virtual character and the second virtual character.
The target angle is an included angle between a boundary of the target area in the game scene and the reference line, for example, when the target angle is in a clockwise direction of the reference line, the target angle is a positive value, and when the target angle is in an anticlockwise direction of the reference line, the target angle is a negative value.
The datum line is a connecting line between the first virtual character and the second virtual character, and the angle of the connecting line is zero.
In some embodiments, in order to be able to determine the distance between the target area and the second virtual character, a target angle is determined, the method comprising:
determining a first angle, wherein the first angle is an included angle between the direction and a datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
a target angle is determined based on a difference between the first angle and the second angle.
Wherein the first angle characterizes the angular distance between the direction and the reference line. For example, if the direction is 0° and the reference line is 10° counterclockwise of the direction, the first angle is -10°.
The second angle characterizes the angular distance between the direction and a boundary of the target area. For example, if the direction is 0° and the first boundary of the target region is 7° counterclockwise of the direction, the second angle corresponding to the first boundary is -7°; if the second boundary of the target region is 7° clockwise of the direction, the second angle corresponding to the second boundary is +7°.
The target angle is the angle between the boundary of the target region adjacent to the reference line and the reference line. For example, if the first angle is -10° and the second angle corresponding to the first boundary of the target region is -7°, then (-10°) - (-7°) = -3°; the second angle corresponding to the second boundary of the target region is +7°, so (-10°) - (+7°) = -17°. Since the absolute value of -3 is less than the absolute value of -17, the target angle is -3°.
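The target-angle computation in the example can be sketched as choosing, among the per-boundary differences, the one with the smallest absolute value; the list representation of the second angles is an assumption for illustration.

```python
def target_angle(first_angle, second_angles):
    # first_angle: angle between the direction and the reference line.
    # second_angles: angles between the direction and each boundary of
    # the target area. The target angle is the difference with the
    # smallest absolute value, i.e. the angle for the boundary nearest
    # the reference line.
    return min((first_angle - a for a in second_angles), key=abs)

# Example from the text: first angle -10°, boundaries at -7° and +7°.
angle = target_angle(-10, [-7, 7])  # (-10)-(-7) = -3, (-10)-(+7) = -17
```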
250. And according to the target angle and the skill release deviation, performing region adjustment on the target region to obtain an updated region, the updated region enabling the skill released by the first virtual character in the direction to hit the second virtual character.
For example, the target angle is -3°, and the skill release deviation corresponding to the first virtual character adopting the second virtual appearance is -3° and +3°. When the absolute value of the target angle is within the absolute value of the skill release deviation, the target area is enlarged by the skill release deviation of -3° and +3°: the first boundary of the target area is extended by 3° in the counterclockwise direction and the second boundary by 3° in the clockwise direction, obtaining an updated area in the range (-10°, +10°), so that the first virtual character can, in that direction, hit a second virtual character that is not in the original target area.
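Step 250 can be sketched as widening each boundary of the target area by the corresponding skill release deviation when the target angle falls within the deviation. The `<=` comparison follows the worked example, where both absolute values are 3°; the tuple representations are assumptions for illustration.

```python
def adjust_region(target_area, target_angle, deviation):
    # target_area: (lower, upper) bounds in degrees around the release
    # direction; deviation: (lower, upper) skill release deviation.
    lower_dev, upper_dev = deviation
    if abs(target_angle) <= max(abs(lower_dev), abs(upper_dev)):
        # Extend each boundary by its deviation to form the updated area.
        return (target_area[0] + lower_dev, target_area[1] + upper_dev)
    return target_area

# Example from the text: area (-7°, +7°), target angle -3°,
# deviation (-3°, +3°) -> updated area (-10°, +10°).
updated = adjust_region((-7, 7), -3, (-3, 3))
```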
As can be seen from the above, in the embodiment of the present application, the skill release deviation function associated with the user is obtained, together with the first virtual appearance, the second virtual appearance, and the element difference value in the prompt information generation method provided by the embodiment; the skill release deviation is predicted according to the skill release deviation function and the element difference value, where the skill release deviation represents the angle deviation of the skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when it adopts the first virtual appearance; the direction in which the first virtual character releases the skill in the game scene when adopting the second virtual appearance and a target area are obtained, the target area being the skill release area in that direction; a target angle is determined, the target angle being the angle between the boundary of the target area in the game scene and a reference line, the reference line being the connection line between the position of the first virtual character and the position of the second virtual character; and the target region is adjusted according to the target angle and the skill release deviation to obtain an updated region, the updated region enabling the skill released by the first virtual character in the direction to hit the second virtual character.
According to the scheme, the angle deviation of the skill release area between the two virtual appearances associated with the user, namely the skill release deviation, can be predicted from the element difference value and the skill release deviation function associated with the user. Meanwhile, the direction in which the first virtual character releases the skill and the target area are obtained, the target area being the skill release area in that direction when the first virtual character adopts the second virtual appearance, and the angle between the boundary of the target area and the reference line is measured, the reference line being the connection line between the first virtual character and the second virtual character. When the absolute value of the target angle is smaller than the absolute value of the skill release deviation, the skill release deviation is added on the basis of the target area, so that the target area is adjusted to obtain an updated area, and the skill released in that direction by the first virtual character adopting the second virtual appearance can hit the second virtual character. Therefore, the influence of the virtual appearance on the skill release of the first virtual character is reduced, and the user's understanding of the virtual appearance adopted by the virtual character is improved.
The method described in the above embodiments is further described in detail below.
In this embodiment, a detailed description will be given of a prompt information generation method according to an embodiment of the present application, taking the construction of a skill release difference function associated with a user as an example.
The method comprises the following specific processes:
the method comprises the steps of (I) obtaining the precision of the skill release of the virtual character adopting the first virtual appearance.
And (II) acquiring the precision of skill release when the virtual character adopts the historical virtual appearance.
And (III) determining a historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to skill release when the virtual character adopts the first virtual appearance.
And (IV) acquiring the color values of all the pixels in the first virtual appearance and the color values of the pixels in the historical virtual appearance.
And (V) taking the pixel corresponding to the color value with the largest occurrence number in the first virtual appearance as a first element, and taking the pixel corresponding to the color value with the largest occurrence number in the historical virtual appearance as a main visual element in the historical virtual appearance.
And (VI) acquiring the radian of all arcs in the outline forming the first virtual appearance and the radian of all arcs in the outline forming the second virtual appearance.
And (VII) taking the arc corresponding to the radian with the largest number of occurrences in the first virtual appearance as the first element, and taking the arc corresponding to the radian with the largest number of occurrences in the historical virtual appearance as the primary visual element in the historical virtual appearance.
In some embodiments, obtaining the radians of all of the arcs in the contour that make up the first virtual appearance and the radians of all of the arcs in the contour that make up the second virtual appearance comprises:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference between the first sub-image and the background sub-image, and determining a second target sub-image comprising an arc according to the color difference between the second sub-image and the background sub-image, wherein all color differences corresponding to the first target sub-image and the second target sub-image comprise a color difference equal to zero and a color difference not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the color difference value which is not zero in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
And (VIII) determining a historical element difference value, wherein the historical element difference value is a numerical difference value between the primary visual element in the historical virtual appearance and the first element.
And (IX) constructing a skill release difference function according to the historical skill release difference and the historical element difference value.
And (X) determining an element difference value, wherein the element difference value is a numerical difference value between the primary visual element in the second virtual appearance and the first element.
And (XI) predicting a skill release difference according to the skill release difference function and the element difference value.
And (XII) generating prompt information according to the skill release difference to prompt the user about the precision difference of skill release when the virtual character adopts the second virtual appearance relative to when it adopts the first virtual appearance.
And (XIII) acquiring a preset skill release difference.
And (XIV) determining a predicted element difference value according to the skill release difference function and the preset skill release difference, so as to provide a reference for designing a new second virtual appearance.
Therefore, the knowledge of the virtual appearance adopted by the virtual character can be improved for the user, and a reference can be provided for designing a new second virtual appearance.
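Two of the steps above can be sketched together: selecting the primary visual element as the most frequent color value (or radian), and inverting an assumed linear skill release difference function to derive a predicted element difference from a preset skill release difference. The linear form and all coefficients are hypothetical.

```python
from collections import Counter

def primary_element(values):
    # Most frequent value among all pixels' color values (or all arcs'
    # radians) making up a virtual appearance.
    return Counter(values).most_common(1)[0][0]

def predicted_element_difference(k, b, preset_difference):
    # Invert the assumed linear form y = k*x + b to find the element
    # difference x that would yield the preset skill release difference y,
    # as a reference for designing a new second virtual appearance.
    return (preset_difference - b) / k

dominant_color = primary_element([(255, 0, 0), (0, 0, 255), (255, 0, 0)])
design_reference = predicted_element_difference(0.002, 0.0, 0.05)  # about 25.0
```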
The method described in the above embodiments is further described in detail below.
In this embodiment, a detailed description will be given of a region adjustment method according to an embodiment of the present application, taking the construction of a skill release deviation function associated with a user as an example.
The method comprises the following specific processes:
the method comprises the steps of (I) acquiring a skill release area of a first virtual character in a first virtual appearance under different angles.
For example, when the character is oriented at 0°, the area in the historical data in which the first virtual character releases skills with the first virtual appearance ranges from -5° to +5°.
And (II) acquiring, at the same angle, the skill release area when the first virtual character adopts the historical virtual appearance among the different virtual appearances.
For example, when the character is oriented at 0°, the area in all historical data in which the first virtual character releases skills with the historical virtual appearance ranges from -7° to +7°.
And (III) calculating historical skill release deviation, wherein the historical skill release deviation is the angle deviation between the skill release area of the first virtual character adopting the historical virtual appearance and the skill release area of the first virtual character adopting the first virtual appearance.
And (IV) correlating the historical skill release deviation with the historical element difference in the prompt information generation method of the embodiment of the application to obtain a skill release deviation function.
And fifthly, when the user controls the first virtual role to adopt the second virtual appearance, acquiring an element difference value in the prompt message generation method of the embodiment of the application.
And (VI) predicting skill release deviation according to the skill release deviation function and the element difference value.
And (seventhly), acquiring the skill releasing direction in the game scene when the first virtual character adopts the second virtual appearance, and acquiring a target area, wherein the target area is the skill releasing area in the direction when the first virtual character adopts the second virtual appearance.
And (eighthly), determining a target angle, wherein the target angle is an included angle between the boundary of the target area in the game scene and a reference line, and the reference line is a connection line between the first virtual character and the second virtual character.
And (ninthly), according to the target angle and the skill release deviation, performing region adjustment on the target region to obtain an updated region, where the updated region enables the skill released by the first virtual character in the direction to hit the second virtual character.
Therefore, the skill released in the direction by the first virtual character adopting the second virtual appearance can hit the second virtual character, so that the influence of the virtual appearance on the skill release of the first virtual character is reduced, and the user's understanding of the virtual appearance adopted by the virtual character can be improved.
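The nine steps above can be sketched in code. This is a minimal illustration under assumed simplifications (a scalar element difference, a linear deviation function, and an angular skill release region given as a (low, high) pair in degrees); the patent does not specify any of these forms:

```python
def predict_deviation(deviation_fn, element_diff):
    # Step (VI): apply the user-associated skill release deviation
    # function to the element difference between the two appearances.
    return deviation_fn(element_diff)

def adjust_region(region, target_angle, deviation):
    # Steps (VIII)-(IX): the target angle locates the second virtual
    # character relative to the region boundary; if the predicted
    # deviation would carry the release outside the region, shift the
    # whole region against the deviation so the skill still hits.
    lo, hi = region
    if not (lo <= target_angle + deviation <= hi):
        lo, hi = lo - deviation, hi - deviation
    return (lo, hi)

# Hypothetical: element difference 4.0 with an assumed fitted slope of
# 0.5 gives a 2.0-degree predicted deviation.
deviation = predict_deviation(lambda d: 0.5 * d, 4.0)
updated = adjust_region((-5.0, 5.0), target_angle=4.0, deviation=deviation)
```

With these assumed numbers the release would land at 6°, outside the -5° to +5° region, so the region is shifted to (-7.0, 3.0).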
In order to implement the above method better, an embodiment of the present application further provides a prompt information generating apparatus, where the prompt information generating apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal, a server, or other devices. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, the method of the embodiment of the present application will be described in detail by taking an example in which the prompt information generating device is specifically integrated in the electronic device.
For example, as shown in fig. 3, the hint information generation apparatus may include a first acquisition unit 310, an element determination unit 320, a difference determination unit 330, a difference prediction unit 340, and a hint unit 350, as follows:
first, a first obtaining unit 310.
The first obtaining unit 310 is configured to obtain a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance.
In some embodiments, prior to obtaining the skill release difference function associated with the user, the method further comprises:
obtaining a historical virtual appearance of the virtual character, wherein the historical virtual appearance is a second virtual appearance adopted by the virtual character in historical time;
determining a history element difference value, wherein the history element difference value is a numerical difference value between a main visual element and a first element in the history virtual appearance;
determining historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to the skill release when the virtual character adopts the first virtual appearance;
and constructing a skill release difference function according to the historical skill release difference and the historical element difference value.
In some embodiments, a historical skill release variance is determined, the method comprising:
acquiring the total number and the number of hits of the released skills when the virtual character adopts a first virtual appearance, and acquiring the total number and the number of hits of the released skills when the virtual character adopts a historical virtual appearance;
determining the precision of the skill release when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
and determining the precision of the skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the historical virtual appearance to the total number.
Determining a historical skill release difference according to a ratio of the precision of the skill release when the virtual character adopts the historical virtual appearance to the precision of the skill release when the virtual character adopts the first virtual appearance.
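As a hedged sketch, the hit-ratio computation described above might look like the following (the function names are illustrative, not from the patent):

```python
def skill_release_accuracy(hits, total):
    # Precision of skill release: ratio of hit count to total releases.
    return hits / total if total else 0.0

def historical_skill_release_difference(hist_hits, hist_total,
                                        first_hits, first_total):
    # Ratio of the accuracy under the historical virtual appearance to
    # the accuracy under the first virtual appearance.
    return (skill_release_accuracy(hist_hits, hist_total)
            / skill_release_accuracy(first_hits, first_total))

# e.g. 40/100 hits with the historical appearance vs 50/100 with the
# first appearance gives a difference of 0.4 / 0.5 = 0.8.
```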
And (ii) an element determination unit 320.
An element determining unit 320 for determining a first element being a dominant visual element in the first virtual appearance and a second element being a dominant visual element in the second virtual appearance.
In some embodiments, the visual element comprises a pixel, the first element and the second element are determined, the method comprising:
obtaining color values of all pixels of the first virtual appearance and color values of all pixels of the second virtual appearance;
determining a first element from all the pixels of the first virtual appearance and a second element from all the pixels of the second virtual appearance according to the number of pixels with the same color value.
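A minimal sketch of this pixel-counting step, assuming pixels are given as RGB tuples (an assumption; the patent only speaks of color values):

```python
from collections import Counter

def dominant_color(pixels):
    # The primary visual element of an appearance: the color value
    # shared by the largest number of pixels.
    return Counter(pixels).most_common(1)[0][0]

# Hypothetical pixel data for the two appearances:
first_element = dominant_color([(255, 0, 0)] * 60 + [(0, 0, 255)] * 40)
second_element = dominant_color([(0, 255, 0)] * 70 + [(0, 0, 0)] * 30)
```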
In some embodiments, the visual element comprises an arc, the first element and the second element are determined, and the method comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
the first element is determined from all arcs constituting the first virtual appearance contour and the second element is determined from all arcs constituting the second virtual appearance contour according to the number of arcs with the same radian.
In some embodiments, the first element is determined from all arcs making up the first virtual appearance contour and the second element is determined from all arcs making up the second virtual appearance contour according to the number of arcs with the same radian, the method comprising:
determining a first candidate arc from all of the arcs making up the first virtual appearance profile and a second candidate arc from all of the arcs making up the second virtual appearance profile according to the number of arcs with the same radian;
determining a first element from the first candidate arcs according to the radian of each first candidate arc;
determining a second element from the second candidate arcs according to the radian of each second candidate arc.
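The two-stage selection above (most frequent radian first, then a choice among the candidates) can be sketched as follows; the widest-arc tie-break is an assumption, since the patent does not state how a single element is chosen from several candidates:

```python
from collections import Counter

def dominant_arc(arc_radians):
    # Stage 1: keep the radian value(s) occurring most often among the
    # contour arcs -- these are the candidate arcs.
    counts = Counter(arc_radians)
    top = max(counts.values())
    candidates = [r for r, n in counts.items() if n == top]
    # Stage 2 (assumed tie-break): take the widest candidate arc.
    return max(candidates)
```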
In some embodiments, the radians of all arcs making up the first virtual appearance profile and the radians of all arcs making up the second virtual appearance profile are obtained, the method comprising:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference value between the first sub-image and the background sub-image, wherein the first sub-image is a sub-image in an image with a first virtual appearance, the background sub-image is a sub-image in the background image, and all color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
determining a second target sub-image from the plurality of second sub-images according to the color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in an image with a second virtual appearance, and all color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the color difference value which is not zero in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
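A simplified sketch of the sub-image test described above, using flat grayscale tiles (the patent operates on full images with a preset splitting rule; the tile shape and values here are hypothetical):

```python
def find_boundary_tiles(appearance_tiles, background_tiles):
    # A tile is a target sub-image when its per-pixel color differences
    # against the matching background tile contain BOTH zero values
    # (background showing through) and non-zero values (the appearance
    # itself), i.e. the appearance contour passes through the tile.
    targets = []
    for idx, (tile, bg) in enumerate(zip(appearance_tiles,
                                         background_tiles)):
        diffs = [abs(a - b) for a, b in zip(tile, bg)]
        if any(d == 0 for d in diffs) and any(d != 0 for d in diffs):
            targets.append(idx)
    return targets

background = [[10, 10, 10, 10]] * 3
appearance = [[10, 10, 10, 10],        # pure background tile
              [10, 200, 200, 10],      # contour tile: mixed differences
              [200, 200, 200, 200]]    # pure appearance tile
```

Only the tiles returned here would then be passed to the angle detection that recovers the arc radians of the contour.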
And (iii) a difference value determination unit 330.
A difference determination unit 330, configured to determine an element difference, where the element difference is a numerical difference between the second element and the first element.
In some embodiments, the primary visual element comprises a pixel and the element difference is a color difference of the pixel.
In some embodiments, the primary visual element further comprises an arc, the element difference being an angular difference of the arc.
(IV), a difference prediction unit 340.
And the difference prediction unit 340 is configured to predict a skill release difference according to the skill release difference function and the element difference value, where the skill release difference represents a difference in precision of the skill release of the virtual character in the second virtual appearance relative to the skill release in the first virtual appearance.
In some embodiments, the skill release difference function comprises a first function and a second function, the skill release difference is predicted from the skill release difference function and the element difference value, the method comprising:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
the first skill release difference and the second skill release difference are taken as skill release differences.
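The two-part prediction can be sketched as below; the linear forms of the first and second functions are purely illustrative assumptions:

```python
def predict_skill_release_difference(first_fn, second_fn,
                                     color_diff, angle_diff):
    # The first function maps the pixel color difference, the second
    # maps the arc angle difference; both results together form the
    # skill release difference.
    return (first_fn(color_diff), second_fn(angle_diff))

# Hypothetical fitted forms:
first_fn = lambda c: 0.01 * c     # per unit of color difference
second_fn = lambda a: 0.20 * a    # per degree of arc angle difference
difference = predict_skill_release_difference(first_fn, second_fn,
                                              50, 0.5)
```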
(V) a prompt unit 350.
And a prompt unit 350, configured to generate prompt information according to the skill release difference, so as to prompt the user.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, the prompt information generation apparatus of the present embodiment acquires, by the acquisition unit, the first virtual appearance and the second virtual appearance of the virtual character, the second virtual appearance being used to replace the first virtual appearance, and the skill release difference function associated with the user; determining, by an element determination unit, a first element and a second element, the first element being a primary visual element in a first virtual appearance and the second element being a primary visual element in a second virtual appearance; determining, by a difference determination unit, an element difference, the element difference being a numerical difference between the second element and the first element; predicting skill release difference by a difference prediction unit according to a skill release difference function and an element difference value, wherein the skill release difference represents the precision difference of skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance; and generating prompt information by a prompt unit according to the skill release difference so as to prompt the user.
Therefore, the embodiment of the application can improve the understanding of the user on the virtual appearance adopted by the virtual character.
In order to better implement the method, an embodiment of the present application further provides an area adjusting apparatus, where the area adjusting apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal, a server, or the like. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, the method of the embodiment of the present application will be described in detail by taking an example in which the area adjustment device is specifically integrated in the electronic device.
For example, as shown in fig. 4, the area adjusting apparatus provides a graphical user interface through a terminal device, the content of the graphical user interface at least partially includes a game scene and a first virtual character and a second virtual character therein, the first virtual character is a virtual character controlled by a user through the terminal device, and the apparatus may include a second obtaining unit 410, a deviation determining unit 420, a third obtaining unit 430, an angle determining unit 440, and an area adjusting unit 450, as follows:
first, second obtaining unit 410.
The second obtaining unit 410 is configured to obtain a skill release deviation function associated with a user, and a first virtual appearance, a second virtual appearance, and an element difference in any one of the prompt information generation methods provided in the embodiments of the present application.
In some embodiments, prior to obtaining the skill release deviation function associated with the user, the method further comprises:
acquiring a difference value between a historical virtual appearance and a historical element in any prompt information generation method provided by the embodiment of the application;
determining historical skill release deviation, wherein the historical skill release deviation is the angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
and constructing a skill release deviation function according to the historical skill release deviation and the historical element difference value.
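One plausible way to construct such a function from the historical pairs is a least-squares linear fit; the linear form is an assumption for illustration, since the patent only requires associating the historical skill release deviation with the historical element difference value:

```python
def fit_deviation_function(history):
    # history: list of (historical element difference,
    #                   historical skill release deviation) pairs.
    # Least-squares fit of deviation = k * element_difference
    # (a line through the origin).
    num = sum(diff * dev for diff, dev in history)
    den = sum(diff * diff for diff, _ in history)
    k = num / den
    return lambda element_diff: k * element_diff

# Hypothetical historical pairs:
deviation_fn = fit_deviation_function([(2.0, 1.0), (4.0, 2.0),
                                       (6.0, 3.0)])
```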
And (ii) a deviation determination unit 420.
And the deviation determining unit 420 is configured to predict a skill release deviation according to the skill release deviation function and the element difference value, where the skill release deviation represents an angle deviation of a skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when the first virtual appearance is adopted.
And (iii) a third obtaining unit 430.
A third obtaining unit 430, configured to obtain a direction of releasing skill in the game scene when the first virtual character adopts the second virtual appearance, and a target area, where the target area is an area of releasing skill in the direction when the first virtual character adopts the second virtual appearance.
(IV), an angle determining unit 440.
The angle determining unit 440 is configured to determine a target angle, where the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between the first virtual character and the second virtual character.
In some embodiments, a target angle is determined, the method comprising:
determining a first angle, wherein the first angle is an included angle between the direction and a datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
a target angle is determined based on a difference between the first angle and the second angle.
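The three sub-steps can be sketched with 2-D vectors; treating the release direction, the reference line, and the region boundary as direction vectors is an illustrative assumption:

```python
import math

def angle_between(v1, v2):
    # Unsigned angle between two 2-D vectors, in degrees.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def target_angle(direction, reference_line, boundary):
    first = angle_between(direction, reference_line)  # direction vs reference line
    second = angle_between(direction, boundary)       # direction vs region boundary
    return first - second                             # target angle

# Hypothetical vectors: reference line 45 deg and boundary 30 deg from
# the release direction, giving a target angle of 15 deg.
angle = target_angle((1.0, 0.0),
                     (1.0, 1.0),
                     (math.cos(math.radians(30)), math.sin(math.radians(30))))
```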
(V) a zone adjustment unit 450.
The area adjusting unit 450 is configured to perform area adjustment on the target area according to the target angle and the skill release deviation to obtain an updated area, and the updated area enables the first virtual character to release the skill in the direction to hit the second virtual character.
As can be seen from the above, in the embodiment, the second obtaining unit obtains the skill release deviation function associated with the user, and the first virtual appearance, the second virtual appearance and the element difference value in any one of the prompt information generating methods provided in the embodiments of the present application; predicting skill release deviation by a deviation determining unit according to a skill release deviation function and the element difference value, wherein the skill release deviation represents the angle deviation of a skill release area of the first virtual character when adopting a second virtual appearance relative to the skill release area when adopting a first virtual appearance; the third obtaining unit obtains the direction of releasing the skill in the game scene when the first virtual character adopts the second virtual appearance, and a target area, wherein the target area is the area of releasing the skill in the direction when the first virtual character adopts the second virtual appearance; determining a target angle by an angle determining unit, wherein the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between a first virtual character and a second virtual character; and performing region adjustment on the target region by a region adjustment unit according to the target angle and the skill release deviation to obtain an updated region, where the updated region enables the skill released by the first virtual character in the direction to hit the second virtual character.
Therefore, the embodiment of the application can reduce the influence of the virtual appearance on the skill release of the first virtual character, and can improve the user's understanding of the virtual appearance adopted by the virtual character.
Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal computer, and a Personal Digital Assistant (PDA).
As shown in fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 500 includes a processor 510 having one or more processing cores, a memory 520 having one or more computer-readable storage media, and a computer program stored in the memory 520 and running on the processor. The processor 510 is electrically connected to the memory 520. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The processor 510 is a control center of the electronic device 500, connects various parts of the entire electronic device 500 using various interfaces and lines, and performs various functions of the electronic device 500 and processes data by running or loading software programs and/or modules stored in the memory 520 and calling data stored in the memory 520, thereby performing overall monitoring of the electronic device 500.
In this embodiment, the processor 510 in the electronic device 500 loads instructions corresponding to processes of one or more application programs into the memory 520, and the processor 510 runs the application programs stored in the memory 520, so as to implement various functions in a prompt information generation method or a region adjustment method, according to the following steps:
a prompt message generation method comprises the following steps:
obtaining a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance;
determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element;
predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the precision difference of skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance;
and generating prompt information according to the skill release difference so as to prompt the user.
In some embodiments, the visual element comprises a pixel, the first element and the second element are determined, the method comprising:
obtaining color values of all pixels of the first virtual appearance and color values of all pixels of the second virtual appearance;
the first element is determined from all pixels of the first virtual appearance and the second element is determined from all pixels of the second virtual appearance according to the number of pixels with the same color value.
In some embodiments, the visual element comprises an arc, the first element and the second element are determined, and the method comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
the first element is determined from all arcs constituting the first virtual appearance contour and the second element is determined from all arcs constituting the second virtual appearance contour according to the number of arcs with the same radian.
In some embodiments, the first element is determined from all arcs making up the first virtual appearance contour and the second element is determined from all arcs making up the second virtual appearance contour according to the number of arcs with the same radian, the method comprising:
determining a first candidate arc from all arcs constituting the first virtual appearance contour and a second candidate arc from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian;
determining a first element from the first candidate arcs according to the radian of each first candidate arc;
determining a second element from the second candidate arcs according to the radians of all the second candidate arcs.
In some embodiments, the radians of all arcs making up the first virtual appearance profile and the radians of all arcs making up the second virtual appearance profile are obtained, the method comprising:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference value between the first sub-image and the background sub-image, wherein the first sub-image is a sub-image in an image with a first virtual appearance, the background sub-image is a sub-image in the background image, and all color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
determining a second target sub-image from the plurality of second sub-images according to the color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in an image with a second virtual appearance, and all color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the color difference value which is not zero in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
In some embodiments, the primary visual element comprises a pixel and the element difference is a color difference of the pixel.
In some embodiments, the primary visual element further comprises an arc, the element difference being an angular difference of the arc.
In some embodiments, the skill release difference function comprises a first function and a second function, the skill release difference is predicted from the skill release difference function and the element difference value, the method comprising:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
the first skill release difference and the second skill release difference are taken as skill release differences.
In some embodiments, prior to obtaining the skill release difference function associated with the user, the method further comprises:
acquiring a historical virtual appearance of the virtual character, wherein the historical virtual appearance is a second virtual appearance adopted by the virtual character in historical time;
determining a historical element difference value, wherein the historical element difference value is a numerical difference value between a main visual element and a first element in the historical virtual appearance;
determining historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to the skill release when the virtual character adopts the first virtual appearance;
and constructing a skill release difference function according to the historical skill release difference and the historical element difference.
In some embodiments, a historical skill release variance is determined, the method comprising:
acquiring the total number and the number of hits released by the virtual character when adopting a first virtual appearance, and acquiring the total number and the number of hits released by the virtual character when adopting a historical virtual appearance;
determining the precision of the skill release when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
and determining the precision of the skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the historical virtual appearance to the total number.
Determining a historical skill release difference according to a ratio of the precision of the skill release when the virtual character adopts the historical virtual appearance to the precision of the skill release when the virtual character adopts the first virtual appearance.
A method of zone adjustment, the method comprising:
acquiring a skill release deviation function associated with a user, and a first virtual appearance, a second virtual appearance and an element difference value in any prompt information generation method provided by the embodiment of the application;
predicting skill release deviation according to a skill release deviation function and the element difference value, wherein the skill release deviation represents the angle deviation of a skill release area of the first virtual character in the second virtual appearance relative to the skill release area in the first virtual appearance;
the method comprises the steps of obtaining a skill releasing direction in a game scene when a first virtual character adopts a second virtual appearance and a target area, wherein the target area is an area for releasing the skill in the direction when the first virtual character adopts the second virtual appearance;
determining a target angle, wherein the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between a first virtual character and a second virtual character;
and according to the target angle and the skill release deviation, performing region adjustment on the target region to obtain an updated region, where the updated region enables the skill released by the first virtual character in the direction to hit the second virtual character.
In some embodiments, a target angle is determined, the method comprising:
determining a first angle, wherein the first angle is an included angle between the direction and a datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
a target angle is determined based on a difference between the first angle and the second angle.
In some embodiments, prior to obtaining the skill release deviation function associated with the user, the method further comprises:
acquiring a difference value between a historical virtual appearance and a historical element in any prompt information generation method provided by the embodiment of the application;
determining historical skill release deviation, wherein the historical skill release deviation is the angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
and constructing a skill release deviation function according to the historical skill release deviation and the historical element difference value.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 5, the electronic device 500 further includes: touch display screen 530, radio frequency circuit 540, audio circuit 550, input unit 560, and power supply 570. The processor 510 is electrically connected to the touch display screen 530, the radio frequency circuit 540, the audio circuit 550, the input unit 560, and the power supply 570, respectively. Those skilled in the art will appreciate that the electronic device configuration illustrated in FIG. 5 does not constitute a limitation of the electronic device, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 530 can be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 530 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user and the various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory), generate corresponding operation instructions, and execute the corresponding programs according to those instructions. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 510, and can also receive and execute commands sent by the processor 510. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel according to that type.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 530 to implement input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 530 may also serve as a part of the input unit 560 to implement an input function.
The rf circuit 540 may be used to transmit and receive rf signals so as to establish wireless communication with a network device or other computer device, and to exchange signals with that network device or other computer device.
The audio circuit 550 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On the one hand, the audio circuit 550 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 550 receives and converts into audio data; the audio data is then output to the processor 510 for processing and afterwards transmitted via the rf circuit 540 to, for example, another computer device, or output to the memory 520 for further processing. The audio circuit 550 may also include an earbud jack to allow peripheral headphones to communicate with the computer device.
The input unit 560 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Power supply 570 is used to power the various components of computer device 500. Optionally, the power supply 570 may be logically connected to the processor 510 through a power management system, so that the power management system may manage charging, discharging, and power consumption management functions. Power supply 570 may also include any component including one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 5, the computer device 500 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided by this embodiment may obtain a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance; determining a first element and a second element, wherein the first element is a main visual element in the first virtual appearance, the second element is a main visual element in the second virtual appearance, and the main visual element is a visual element with the largest proportion; determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element; predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the precision difference of skill release of the virtual character adopting the second virtual appearance relative to the skill release of the virtual character adopting the first virtual appearance; and generating prompt information according to the skill release difference so as to prompt the user.
Therefore, the embodiment of the application can improve the understanding of the user on the virtual appearance adopted by the virtual character.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any one of the prompt information generating methods or one of the area adjusting methods provided in the present application. For example, the computer program may perform the steps of:
a method for generating prompt information comprises the following steps:
obtaining a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance;
determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
determining an element difference value, wherein the element difference value is a numerical difference value between the second element and the first element;
predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents the difference of the accuracy of skill release of the virtual character adopting the second virtual appearance relative to the accuracy of skill release of the virtual character adopting the first virtual appearance;
and generating prompt information according to the skill release difference so as to prompt the user.
In some embodiments, the visual element comprises a pixel, the first element and the second element are determined, the method comprising:
obtaining color values of all pixels of the first virtual appearance and color values of all pixels of the second virtual appearance;
the first element is determined from all pixels of the first virtual appearance and the second element is determined from all pixels of the second virtual appearance according to the number of pixels of the same color value.
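The pixel embodiment above amounts to a frequency count over color values. A minimal sketch, assuming pixels are given as color-value tuples (the helper name is hypothetical, not from the source):

```python
from collections import Counter

def primary_visual_element(pixels):
    """Pick the color value with the largest pixel count, i.e. the
    primary visual element among all pixels of a virtual appearance."""
    counts = Counter(pixels)
    color, _ = counts.most_common(1)[0]
    return color
```

Applied to each appearance's pixels in turn, this yields the first and second elements respectively.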
In some embodiments, the visual element comprises an arc, the first element and the second element are determined, and the method comprises:
acquiring all arcs forming the first virtual appearance outline and all arcs forming the second virtual appearance outline;
the first element is determined from all arcs constituting the first virtual appearance contour and the second element is determined from all arcs constituting the second virtual appearance contour according to the number of arcs having the same radian.
In some embodiments, the determining of the first element from all arcs making up the first virtual appearance contour and the second element from all arcs making up the second virtual appearance contour according to the number of arcs having the same radian comprises:
determining a first candidate arc from all arcs constituting the first virtual appearance contour and a second candidate arc from all arcs constituting the second virtual appearance contour according to the number of arcs of the same radian;
determining a first element from the first candidate arcs according to the radian of each first candidate arc;
determining a second element from the second candidate arcs according to the radians of all the second candidate arcs.
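The two-stage arc selection above can be sketched as follows: arcs whose radian occurs most often form the candidate set, and a tie between candidate groups is then resolved by radian. Choosing the largest radian as the tie-break is an assumption for illustration:

```python
from collections import Counter

def primary_arc(radians):
    """Select the primary arc element: group arcs by radian, keep the
    most frequent radian(s) as candidates, then resolve ties between
    candidate groups by radian (here: the largest, an assumed rule)."""
    counts = Counter(radians)
    top = max(counts.values())
    candidates = [r for r, n in counts.items() if n == top]
    return max(candidates)
```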
In some embodiments, the obtaining of the radians of all arcs making up the first virtual appearance contour and the radians of all arcs making up the second virtual appearance contour comprises:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to the color difference value between the first sub-image and the background sub-image, wherein the first sub-image is a sub-image in an image with a first virtual appearance, the background sub-image is a sub-image in the background image, and all color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
determining a second target sub-image from the plurality of second sub-images according to the color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in an image with a second virtual appearance, and all color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value not equal to zero;
and respectively carrying out angle detection on the corresponding areas of the color difference value which is not zero in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
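The sub-image step above locates tiles that straddle an appearance contour: a target sub-image is one whose per-pixel color differences against the background mix zero and non-zero values. A minimal sketch with 2-D lists of grayscale values; the function name and square tiling scheme are assumptions, and the subsequent angle detection on these tiles is omitted:

```python
def contour_tiles(image, background, tile):
    """Split `image` into tile x tile sub-images and keep those whose
    color differences against `background` include both zero and
    non-zero values, i.e. tiles crossing the appearance contour."""
    h, w = len(image), len(image[0])
    targets = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            diffs = [abs(image[j][i] - background[j][i])
                     for j in range(y, min(y + tile, h))
                     for i in range(x, min(x + tile, w))]
            if any(d == 0 for d in diffs) and any(d != 0 for d in diffs):
                targets.append((x, y))
    return targets
```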
In some embodiments, the primary visual element comprises a pixel and the element difference is a color difference of the pixel.
In some embodiments, the primary visual element further comprises an arc, the element difference being an angular difference of the arc.
In some embodiments, the skill release difference function comprises a first function and a second function, the skill release difference is predicted from the skill release difference function and the element difference value, the method comprising:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
the first skill release difference and the second skill release difference are taken as skill release differences.
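The two-branch prediction above maps the color difference through the first function and the angle difference through the second, and takes both results together as the skill release difference. A sketch; the linear forms of the two functions below are assumptions, not taken from the source:

```python
def predict_skill_release_difference(first_fn, second_fn,
                                     color_diff, angle_diff):
    """Predict the first and second skill release differences from the
    pixel color difference and the arc angle difference respectively."""
    return first_fn(color_diff), second_fn(angle_diff)

# Hypothetical linear difference functions for illustration
first_fn = lambda c: 1.0 - 0.001 * c
second_fn = lambda a: 1.0 - 0.01 * a
```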
In some embodiments, prior to obtaining the skill release difference function associated with the user, the method further comprises:
acquiring a historical virtual appearance of the virtual character, wherein the historical virtual appearance is a second virtual appearance adopted by the virtual character in historical time;
determining a history element difference value, wherein the history element difference value is a numerical difference value between a main visual element and a first element in the history virtual appearance;
determining historical skill release difference, wherein the historical skill release difference is the precision difference of skill release when the virtual character adopts the historical virtual appearance relative to the skill release when the virtual character adopts the first virtual appearance;
and constructing a skill release difference function according to the historical skill release difference and the historical element difference.
In some embodiments, a historical skill release variance is determined, the method comprising:
acquiring the total number and the number of hits of the released skills when the virtual character adopts a first virtual appearance, and acquiring the total number and the number of hits of the released skills when the virtual character adopts a historical virtual appearance;
determining the precision of the skill release when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
and determining the precision of the skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the historical virtual appearance to the total number.
Determining a historical skill release difference according to a ratio of the precision of the skill release when the virtual character adopts the historical virtual appearance to the precision of the skill release when the virtual character adopts the first virtual appearance.
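The historical computation above reduces to two hit ratios and their quotient. A minimal sketch (helper names are not from the source):

```python
def release_accuracy(hits, total):
    """Accuracy of skill release as the ratio of hits to total releases."""
    return hits / total

def historical_skill_release_difference(hist_hits, hist_total,
                                        first_hits, first_total):
    """Ratio of the accuracy under the historical virtual appearance to
    the accuracy under the first virtual appearance."""
    return (release_accuracy(hist_hits, hist_total)
            / release_accuracy(first_hits, first_total))
```

For example, 40 hits out of 100 under the historical appearance against 50 out of 100 under the first appearance gives a historical skill release difference of 0.8.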
A method of area adjustment, the method comprising:
acquiring a skill release deviation function associated with a user, and a first virtual appearance, a second virtual appearance and an element difference value in any prompt information generation method provided by the embodiment of the application;
predicting skill release deviation according to a skill release deviation function and the element difference value, wherein the skill release deviation represents the angle deviation of a skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when the first virtual appearance is adopted;
acquiring a direction in which a skill is released in a game scene when the first virtual character adopts the second virtual appearance, and a target area, wherein the target area is an area in which the skill is released in the direction when the first virtual character adopts the second virtual appearance;
determining a target angle, wherein the target angle is an included angle between a boundary of a target area in a game scene and a reference line, and the reference line is a connection line between a first virtual character and a second virtual character;
and adjusting the target area according to the target angle and the skill release deviation to obtain an updated area, the updated area enabling the first virtual character to hit the second virtual character when releasing the skill in the direction.
In some embodiments, a target angle is determined, the method comprising:
determining a first angle, wherein the first angle is an included angle between the direction and a datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
a target angle is determined based on a difference between the first angle and the second angle.
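The target angle above follows from subtracting the (direction, boundary) angle from the (direction, reference line) angle, and the area adjustment then rotates the boundary by the predicted deviation. A sketch in degrees; treating the adjustment as a simple additive rotation of the boundary angle is an assumption:

```python
def target_angle(first_angle, second_angle):
    """Angle between the target-area boundary and the reference line,
    as the difference of the two direction-relative angles."""
    return first_angle - second_angle

def adjust_boundary(boundary_angle, skill_release_deviation):
    """Assumed adjustment: rotate the boundary by the predicted
    skill release deviation to obtain the updated area's boundary."""
    return boundary_angle + skill_release_deviation
```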
In some embodiments, prior to obtaining the skill release deviation function associated with the user, the method further comprises:
acquiring the historical virtual appearance and the historical element difference value in any prompt information generation method provided by the embodiments of the application;
determining historical skill release deviation, wherein the historical skill release deviation is the angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
and constructing a skill release deviation function according to the historical skill release deviation and the historical element difference value.
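Constructing the deviation function from historical samples can be sketched as a one-parameter fit: the source does not specify the model, so the least-squares line through the origin below (deviation ≈ k · element_diff) is an assumption for illustration:

```python
def fit_deviation_function(element_diffs, deviations):
    """Fit a skill release deviation function from historical element
    difference values and the corresponding historical deviations,
    using a least-squares slope through the origin."""
    num = sum(x * y for x, y in zip(element_diffs, deviations))
    den = sum(x * x for x in element_diffs)
    k = num / den
    return lambda element_diff: k * element_diff
```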
The above operations can be implemented as described in the foregoing embodiments, and are not detailed again here.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any prompt information generation method provided in the embodiments of the present application, the beneficial effects achievable by any such method can likewise be achieved; these are described in the foregoing embodiments and are not repeated here.
The prompt information generation method, the area adjustment method, and the apparatus provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (17)

1. A prompt information generation method applied to a game including virtual characters controlled by a user through a terminal device, the method comprising:
obtaining a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance for replacing the first virtual appearance;
determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
determining an element difference value, the element difference value being a numerical difference value between the second element and the first element;
predicting skill release difference according to the skill release difference function and the element difference value, wherein the skill release difference represents precision difference of skill release of the virtual character in the second virtual appearance relative to skill release of the virtual character in the first virtual appearance;
and generating prompt information according to the skill release difference so as to prompt the user.
2. The prompt information generation method according to claim 1, wherein the visual element comprises a pixel, and the determining a first element and a second element comprises: obtaining color values of all of the pixels of the first virtual appearance and color values of all of the pixels of the second virtual appearance;
determining the first element from all the pixels of the first virtual appearance and the second element from all the pixels of the second virtual appearance according to the number of the pixels of the same color value.
3. The prompt information generation method according to claim 1 or 2, wherein the visual element comprises an arc, and the determining a first element and a second element comprises:
acquiring all the arcs forming the first virtual appearance outline and all the arcs forming the second virtual appearance outline;
determining the first element from all of the arcs making up the first virtual appearance contour and the second element from all of the arcs making up the second virtual appearance contour according to the number of arcs having the same radian.
4. The prompt information generation method according to claim 3, wherein the determining the first element from all of the arcs making up the first virtual appearance contour and the second element from all of the arcs making up the second virtual appearance contour according to the number of the arcs having the same radian comprises:
determining a first candidate arc from all of the arcs making up the first virtual appearance contour and a second candidate arc from all of the arcs making up the second virtual appearance contour according to the number of arcs having the same radian;
determining the first element from each of the first candidate arcs according to the radian of the first candidate arc;
determining the second element from the second candidate arcs according to the radian of each second candidate arc.
5. The prompt information generation method according to claim 3, wherein the acquiring of the radians of all arcs constituting the first virtual appearance contour and the radians of all arcs constituting the second virtual appearance contour comprises:
acquiring an image with a first virtual appearance, an image with a second virtual appearance and a background image, wherein the image color of the background image is the same as the background color in the image with the first virtual appearance and the image with the second virtual appearance;
splitting the image with the first virtual appearance, the image with the second virtual appearance and the background image into a plurality of sub-images by adopting a preset splitting rule;
determining a first target sub-image from the plurality of first sub-images according to a color difference value between the first sub-image and a background sub-image, wherein the first sub-image is a sub-image in the first virtual-appearance image, the background sub-image is a sub-image in the background image, and all the color difference values corresponding to the first target sub-image comprise a color difference value equal to zero and a color difference value different from zero;
determining a second target sub-image from the plurality of second sub-images according to a color difference value between the second sub-image and the background sub-image, wherein the second sub-image is a sub-image in the second virtual-appearance image, and all the color difference values corresponding to the second target sub-image comprise a color difference value equal to zero and a color difference value different from zero;
and respectively carrying out angle detection on the corresponding areas of the non-zero color difference values in the first target sub-image and the second target sub-image to obtain the arc radian forming the first virtual appearance outline and the arc radian forming the second virtual appearance outline.
6. The prompt information generation method according to claim 1, wherein the primary visual element comprises a pixel and the element difference value is a color difference value of the pixel.
7. The prompt information generation method according to claim 6, wherein the primary visual element further comprises an arc, and the element difference value is an angle difference value of the arc.
8. The prompt information generation method according to claim 7, wherein the skill release difference function includes a first function and a second function, and the predicting the skill release difference according to the skill release difference function and the element difference value comprises:
predicting a first skill release difference according to the first function and the color difference value of the pixel;
predicting a second skill release difference according to the second function and the angle difference of the arc;
using the first skill release difference and the second skill release difference as skill release differences.
9. The prompt information generation method according to claim 1, wherein before the obtaining of the skill release difference function associated with the user, the method further comprises:
obtaining a historical virtual appearance of the virtual character, wherein the historical virtual appearance is the second virtual appearance adopted by the virtual character in historical time;
determining a history element difference value, the history element difference value being a numerical difference value between a primary visual element in the history virtual appearance and the first element;
determining a historical skill release difference, wherein the historical skill release difference is a precision difference between skill release of the virtual character adopting the historical virtual appearance and skill release of the virtual character adopting the first virtual appearance;
and constructing the skill release difference function according to the historical skill release difference and the historical element difference.
10. The prompt information generation method according to claim 9, wherein the determining the historical skill release difference comprises:
acquiring the total number and the number of hits of the skills released when the virtual character adopts the first virtual appearance, and the total number and the number of hits of the skills released when the virtual character adopts the historical virtual appearance;
determining the precision of the skill release of the virtual character when the virtual character adopts the first virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the first virtual appearance to the total number;
determining the precision of the skill release when the virtual character adopts the historical virtual appearance according to the ratio of the number of hits of the skill release when the virtual character adopts the historical virtual appearance to the total number;
determining a historical skill release difference according to a ratio of precision of skill release when the virtual character adopts the historical virtual appearance to precision of skill release when the virtual character adopts the first virtual appearance.
11. An area adjustment method, wherein a graphical user interface is provided through a terminal device, content of the graphical user interface at least partially includes a game scene and a first virtual character and a second virtual character therein, and the first virtual character is a virtual character controlled by a user through the terminal device, the method comprising:
obtaining a skill release deviation function associated with the user, and the first virtual appearance, the second virtual appearance, and the element difference value in the prompt information generation method of any of claims 1-10;
predicting skill release deviation according to the skill release deviation function and the element difference value, wherein the skill release deviation represents angle deviation of a skill release area when the first virtual character adopts the second virtual appearance relative to the skill release area when the first virtual appearance is adopted;
acquiring a skill releasing direction in the game scene when the first virtual character adopts the second virtual appearance and a target area, wherein the target area is a skill releasing area in the direction when the first virtual character adopts the second virtual appearance;
determining a target angle, wherein the target angle is an included angle between a boundary of the target area in the game scene and a reference line, and the reference line is a connection line between the first virtual character and the second virtual character;
and according to the target angle and the skill release deviation, performing area adjustment on the target area to obtain an updated area, wherein the updated area enables the first virtual character to release the skill in the direction to hit the second virtual character.
12. The area adjustment method according to claim 11, wherein the determining the target angle comprises:
determining a first angle, wherein the first angle is an included angle between the direction and the datum line;
determining a second angle, wherein the second angle is an included angle between the direction and the boundary of the target area;
determining a target angle according to a difference between the first angle and the second angle.
13. The area adjustment method according to claim 11, wherein before the obtaining of the skill release deviation function associated with the user, the method further comprises:
acquiring the historical virtual appearance and the historical element difference value in the prompt information generation method according to claim 9;
determining a historical skill release deviation, wherein the historical skill release deviation is an angle deviation of a skill release area when the first virtual character adopts the historical virtual appearance relative to the skill release area when the first virtual appearance is adopted;
and constructing the skill release deviation function according to the historical skill release deviation and the historical element difference value.
14. A prompt information generation device, applied to a game including a virtual character controlled by a user through a terminal device, the device comprising:
a first obtaining unit configured to obtain a first virtual appearance and a second virtual appearance of the virtual character, and a skill release difference function associated with the user, the second virtual appearance being used to replace the first virtual appearance;
an element determination unit for determining a first element and a second element, the first element being a primary visual element in the first virtual appearance and the second element being a primary visual element in the second virtual appearance;
a difference determination unit for determining an element difference, which is a numerical difference between the second element and the first element;
a difference prediction unit, configured to predict a skill release difference according to the skill release difference function and the element difference value, where the skill release difference represents a precision difference between skill release of the virtual character in the second virtual appearance and skill release of the virtual character in the first virtual appearance;
and the prompting unit is used for generating prompting information according to the skill release difference so as to prompt the user.
15. An area adjustment apparatus, wherein a graphical user interface is provided through a terminal device, content of the graphical user interface at least partially includes a game scene and a first virtual character and a second virtual character therein, and the first virtual character is a virtual character controlled by a user through the terminal device, the apparatus comprising:
a second obtaining unit, configured to obtain a skill release deviation function associated with the user, and the first virtual appearance, the second virtual appearance, and the element difference in the prompt information generation method according to any one of claims 1 to 10;
a deviation determining unit, configured to predict a skill release deviation according to the skill release deviation function and the element difference, where the skill release deviation represents an angle deviation of a skill release area when the first virtual character adopts the second virtual appearance relative to a skill release area when the first virtual appearance is adopted;
a third obtaining unit, configured to obtain a direction in which a skill is released in the game scene when the first virtual character adopts the second virtual appearance, and a target area, where the target area is an area in which the skill is released in the direction when the first virtual character adopts the second virtual appearance;
an angle determining unit, configured to determine a target angle, where the target angle is an included angle between a boundary of the target area in the game scene and a reference line, and the reference line is a connection line between the first virtual character and the second virtual character;
and the area adjusting unit is used for performing area adjustment on the target area according to the target angle and the skill release deviation to obtain an updated area, and the updated area enables the first virtual character to release the skill in the direction to hit the second virtual character.
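The area adjustment of claim 15 can be sketched under a simplifying assumption: the target area is characterised by the angle its boundary makes with the reference line joining the two characters, so compensating the predicted skill release deviation amounts to rotating the boundary by the negative of that deviation. All names below are illustrative.

```python
def adjust_target_area(target_angle_deg, skill_release_deviation_deg):
    """Rotate the target-area boundary to compensate the deviation.

    target_angle_deg: angle between the target area's boundary and the
        line connecting the first and second virtual characters.
    skill_release_deviation_deg: predicted angle deviation of the skill
        release area under the second virtual appearance.
    Returns the updated boundary angle, normalised to [-180, 180).
    """
    updated_angle = target_angle_deg - skill_release_deviation_deg
    # wrap so the direction of the updated area stays well defined
    return (updated_angle + 180.0) % 360.0 - 180.0
```

In this reading, the updated area is the original area swung back by the predicted deviation, which is what lets a skill released in the same direction still hit the second virtual character.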
16. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps of the prompt information generation method as claimed in any one of claims 1 to 10 or the area adjustment method as claimed in any one of claims 11 to 13.
17. A computer readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the prompt information generation method as claimed in any one of claims 1 to 10 or the area adjustment method as claimed in any one of claims 11 to 13.
CN202210686321.6A 2022-06-16 2022-06-16 Prompt information generation method, area adjustment method and device Pending CN115040868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210686321.6A CN115040868A (en) 2022-06-16 2022-06-16 Prompt information generation method, area adjustment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210686321.6A CN115040868A (en) 2022-06-16 2022-06-16 Prompt information generation method, area adjustment method and device

Publications (1)

Publication Number Publication Date
CN115040868A true CN115040868A (en) 2022-09-13

Family

ID=83160987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210686321.6A Pending CN115040868A (en) 2022-06-16 2022-06-16 Prompt information generation method, area adjustment method and device

Country Status (1)

Country Link
CN (1) CN115040868A (en)

Similar Documents

Publication Publication Date Title
CN110141859B (en) Virtual object control method, device, terminal and storage medium
CN111013142B (en) Interactive effect display method and device, computer equipment and storage medium
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
CN111603771B (en) Animation generation method, device, equipment and medium
WO2022227915A1 (en) Method and apparatus for displaying position marks, and device and storage medium
JP7186901B2 (en) HOTSPOT MAP DISPLAY METHOD, DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIUM
CN111729307A (en) Virtual scene display method, device, equipment and storage medium
CN112755527A (en) Virtual character display method, device, equipment and storage medium
CN113426124A (en) Display control method and device in game, storage medium and computer equipment
CN115082608A (en) Virtual character clothing rendering method and device, electronic equipment and storage medium
CN113082707A (en) Virtual object prompting method and device, storage medium and computer equipment
CN112206517A (en) Rendering method, device, storage medium and computer equipment
TWI817208B (en) Method and apparatus for determining selected target, computer device, non-transitory computer-readable storage medium, and computer program product
CN113134232B (en) Virtual object control method, device, equipment and computer readable storage medium
CN115040868A (en) Prompt information generation method, area adjustment method and device
CN114159788A (en) Information processing method, system, mobile terminal and storage medium in game
CN114159785A (en) Virtual item discarding method and device, electronic equipment and storage medium
CN114042315A (en) Virtual scene-based graphic display method, device, equipment and medium
CN113350793B (en) Interface element setting method and device, electronic equipment and storage medium
CN113633976B (en) Operation control method, device, equipment and computer readable storage medium
CN115300904A (en) Recommendation method and device, electronic equipment and storage medium
US20240131434A1 (en) Method and apparatus for controlling put of virtual resource, computer device, and storage medium
CN115430150A (en) Game skill release method and device, computer equipment and storage medium
CN113546403A (en) Role control method, role control device, terminal and computer readable storage medium
CN116059639A (en) Virtual object control method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination