WO2017054450A1 - Information processing method, terminal, and computer storage medium - Google Patents

Information processing method, terminal, and computer storage medium Download PDF

Info

Publication number
WO2017054450A1
WO2017054450A1 (PCT/CN2016/081041, CN2016081041W)
Authority
WO
WIPO (PCT)
Prior art keywords
character
role
user interface
graphical user
character object
Prior art date
Application number
PCT/CN2016/081041
Other languages
English (en)
French (fr)
Inventor
Tang Yong (唐永)
Weng Jianmiao (翁建苗)
Chen Yu (陈宇)
Gong Wei (龚伟)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=55285650&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=WO2017054450(A1) "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority to EP16850075.9A priority Critical patent/EP3273334B1/en
Priority to JP2018505518A priority patent/JP6529659B2/ja
Priority to CA2982868A priority patent/CA2982868C/en
Priority to KR1020177033331A priority patent/KR102018212B1/ko
Priority to MYPI2017703956A priority patent/MY192140A/en
Publication of WO2017054450A1 publication Critical patent/WO2017054450A1/zh
Priority to US15/725,140 priority patent/US10639549B2/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character

Definitions

  • the present invention relates to information processing technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
  • Embodiments of the present invention provide an information processing method, a terminal, and a computer storage medium, so that a target object can be selected quickly and accurately during information interaction, enhancing the user experience.
  • An embodiment of the present invention provides an information processing method, implemented by executing a software application on a processor of a terminal and rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system. The method includes:
  • rendering at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being a first character object configured to perform a first virtual operation according to an input first user command;
  • deploying, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
  • detecting a second character object in the graphical user interface whose distance from the first character object meets a first preset condition, and rendering, according to a first display parameter, the character operation object associated with the detected second character object in at least one of the window slots;
  • when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, the first character object performing at least one of the first virtual operations on the corresponding second character object.
  • the embodiment of the present invention further provides a terminal, where the terminal includes: a rendering processing unit, a deployment unit, a detecting unit, and an operation executing unit;
  • the rendering processing unit is configured to execute a software application and render a graphical user interface, and to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being a first character object configured to perform a first virtual operation according to an input first user command; the rendering processing unit is further configured to render, according to a first display parameter, the character operation object associated with the second character object detected by the detecting unit in at least one of the window slots;
  • the deployment unit is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
  • the detecting unit is configured to detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition;
  • the operation execution unit is configured to: when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
  • An embodiment of the present invention further provides a terminal, where the terminal includes a processor and a display. The processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system;
  • the processor is further configured to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being a first character object configured to perform a first virtual operation according to an input first user command;
  • the processor is further configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
  • the processor is further configured to detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition, and to render, according to a first display parameter, the character operation object associated with the detected second character object in at least one of the window slots;
  • the processor is further configured to: when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
  • An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer executable instructions for performing the information processing method according to the embodiments of the present invention.
  • With the information processing method, the terminal, and the computer storage medium of the embodiments of the present invention, the character operation object associated with each second character object that performs information interaction with the first character object is rendered in a corresponding window slot of the character container object deployed in the character selection area of the graphical user interface. The character operation object (UI avatar) associated with a second character object whose distance from the first character object meets the first preset condition is rendered according to the first display parameter, so that this UI avatar has a display effect different from that of the other UI avatars. Based on this distinctive display effect, the user can quickly and accurately select the target character object through a selection operation gesture on the corresponding character operation object, which greatly improves the user's operation experience during interaction.
  • FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention.
  • FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the detection principle for a second character object whose distance from the first character object meets the first preset condition in the information processing method according to the embodiment of the present invention
  • FIG. 5 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic diagram of the detection principle for a second character object whose distance from the first character object meets the second preset condition in the information processing method according to the embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention.
  • FIG. 9 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an interaction application of an information processing method according to an embodiment of the present invention.
  • FIG. 11 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a terminal of a fourth embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a terminal of a fifth embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a terminal of a sixth embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention
  • The application architecture includes a server 101 and at least one terminal, where the terminals include the terminal 102, the terminal 103, the terminal 104, the terminal 105, and the terminal 106; the at least one terminal can establish a connection with the server 101 through a network 100 such as a wired network or a wireless network.
  • The terminal includes a mobile phone, a desktop computer, a PC, an all-in-one machine, and the like.
  • The processor of the terminal is capable of executing a software application and rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on the game system.
  • the at least one terminal may perform information interaction with the server 101 through a wired network or a wireless network.
  • The information interaction may be performed in a one-to-one or many-to-many (e.g., three-to-three or five-to-five) application mode scenario.
  • The one-to-one application scenario may be information interaction between a virtual resource object in the graphical user interface rendered by a terminal and a virtual resource object preset in the game system (which can be understood as a human-machine battle), that is, information interaction between the terminal and the server. The one-to-one application scenario may also be information interaction between a virtual resource object in the graphical user interface rendered by one terminal and a virtual resource object in the graphical user interface rendered by another terminal; for example, the virtual resource object in the graphical user interface rendered by the terminal 102 interacts with the virtual resource object in the graphical user interface rendered by the terminal 103.
  • Taking a three-to-three application mode scenario as an example of the many-to-many mode, the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 1, the terminal 2, and the terminal 3 form a first group, and the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 4 and the other terminals of the opposing side form a second group.
  • FIG. 1 is only an example of an application architecture for implementing the embodiments of the present invention. The embodiments of the present invention are not limited to the application architecture shown in FIG. 1; the various embodiments of the present invention are proposed based on this application architecture.
  • FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 2, the method includes:
  • Step 201: Render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being a first character object configured to perform a first virtual operation according to an input first user command.
  • Step 202: Deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot.
  • Step 203: Detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition, and render, according to a first display parameter, the character operation object associated with the detected second character object in at least one of the window slots.
  • Step 204: When a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, the first character object performs at least one of the first virtual operations on the corresponding second character object.
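As a rough illustration only, Steps 201 to 204 can be sketched in Python. Every name here (CharacterObject, WindowSlot, deploy_selection_area, and so on) is an assumption made for this sketch and does not appear in the patent:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class CharacterObject:
    name: str          # e.g. "a10" for the first character object
    x: float = 0.0     # real-time position in the virtual area
    y: float = 0.0

@dataclass
class WindowSlot:
    # avatar (character operation object) of one second character object
    operation_object: Optional[CharacterObject] = None
    # True when rendered with the distinct "first display parameter"
    highlighted: bool = False

@dataclass
class CharacterSelectionArea:
    slots: List[WindowSlot] = field(default_factory=list)

def deploy_selection_area(enemies: List[CharacterObject]) -> CharacterSelectionArea:
    """Step 202: deploy one window slot per second character object."""
    return CharacterSelectionArea(slots=[WindowSlot(operation_object=e) for e in enemies])

def refresh_highlights(area: CharacterSelectionArea,
                       player: CharacterObject, radius: float) -> None:
    """Step 203: mark avatars of enemies within the first preset distance threshold."""
    for slot in area.slots:
        e = slot.operation_object
        slot.highlighted = (e.x - player.x) ** 2 + (e.y - player.y) ** 2 <= radius ** 2

def on_selection_gesture(area: CharacterSelectionArea, slot_index: int,
                         player: CharacterObject,
                         operation: Callable[[CharacterObject, CharacterObject], None]) -> bool:
    """Step 204: a selection gesture on a highlighted avatar triggers the first
    virtual operation on the associated second character object."""
    slot = area.slots[slot_index]
    if slot.highlighted and slot.operation_object is not None:
        operation(player, slot.operation_object)
        return True
    return False
```

Because positions change in real time, refresh_highlights would be called each frame before gestures are dispatched.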
  • In this embodiment, the graphical user interface includes at least one character selection area, the character selection area includes at least one character container object, and the character container object includes at least one window slot. At least some of the window slots carry corresponding character operation objects, where each character operation object is represented in the graphical user interface by an identifier of the character object associated with it (the identifier may be an avatar);
  • the second character object associated with a character operation object belongs to a different group from the first character object.
  • In this embodiment, the rendering manner of the character container object in the character selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection disk object.
  • FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention
  • As shown in FIG. 3, at least one virtual resource object is included in the graphical user interface rendered on the display of the terminal.
  • The virtual resource object includes at least one first character object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the first character object a10 can perform a first virtual operation based on a first user command detected by the terminal. The first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like.
  • It can be understood that the first character object a10 is a character object manipulated by the user of the terminal; in the game system, the first character object a10 performs a corresponding action in the graphical user interface based on the user's operation.
  • The graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located; an enlarged view of the mini-map 801 is shown at 801a. The position of each character object in the virtual area, including friends belonging to the same first group as the first character object a10 and enemies belonging to the second group, is identified in the mini-map 801.
  • The graphical user interface further includes at least one skill object 803, and the user can control the user's character object to perform a corresponding skill release operation through a skill release operation gesture.
  • The graphical user interface has a character selection area 802, in which a character container object is deployed. In this embodiment, the character container object is represented by a character selection bar object; that is, the character container object presents a strip-shaped display effect.
  • The character container object includes at least one window slot, and the character operation object associated with each second character object that interacts with the first character object is rendered in the corresponding window slot.
  • In this embodiment, each character operation object is represented by an avatar; that is, the character selection area 802 includes at least one avatar, and the at least one avatar is in one-to-one correspondence with the at least one second character object that interacts with the first character object.
  • As shown in FIG. 3, there are five second character objects belonging to a different group from the first character object a10, and correspondingly the character selection area 802 includes five character operation objects: the character operation object b11, the character operation object b12, the character operation object b13, the character operation object b14, and the character operation object b15 shown in FIG. 3. It can be understood that the five character operation objects in the character selection area 802 are in one-to-one correspondence with the five second character objects that belong to a different group from the first character object.
  • In a specific implementation, the position of the first character object changes in real time as the user of the terminal manipulates it, and correspondingly the positions of the second character objects in the graphical user interface also change in real time. Therefore, while the first character object is performing a virtual operation on a second character object, it is not easy for the user of the terminal to select the character object on which the virtual operation is to be performed. Based on this, in this embodiment, a second character object whose distance from the first character object in the graphical user interface meets a first preset condition is detected.
  • Specifically, detecting a second character object in the graphical user interface whose distance from the first character object meets the first preset condition includes: detecting, in the graphical user interface, a second character object whose distance from the first character object is less than a first preset threshold.
  • As shown in FIG. 4, a circular area centered on the first character object (object 1) and having the first preset threshold (R) as its radius is determined, and the range of the area included in the circular area is obtained; this range may be represented by a coordinate range.
  • An XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is acquired. Further, the real-time
  • coordinates of each second character object in the graphical user interface are detected, and it is determined whether the detected coordinates fall within the coordinate range representing the circular area. When the coordinates are determined to fall within that range (for example, the second character object 2, the second character object 3, and the second character object 4 in FIG. 4 are in the circular area), the corresponding second character object is determined to meet the first preset condition.
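The coordinate-range test described above reduces to a point-in-circle check. A minimal sketch, assuming positions are plain (x, y) tuples in the XY coordinate system of the virtual space (the function name and data shapes are illustrative, not from the patent):

```python
import math

def second_objects_in_range(player_xy, enemy_positions, radius):
    """Return the ids of second character objects whose real-time coordinates
    fall inside the circle of radius R centred on the first character object,
    i.e. those that meet the first preset condition of FIG. 4."""
    px, py = player_xy
    return [obj_id for obj_id, (x, y) in enemy_positions.items()
            if math.hypot(x - px, y - py) <= radius]
```

With object 1 at the origin and R = 5, enemies at distances 5, 1.4, 5, and 20 would yield objects 2, 3, and 4, matching the FIG. 4 example.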
  • FIG. 5 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention. As shown in FIG. 5, the character operation object associated with a second character object that satisfies the first preset condition is rendered by the first display parameter (see the character operation object b12 shown in FIG. 5:
  • the outer ring edge of the character operation object b12 has a rendering effect different from that of the other character operation objects, giving it a highlighted display effect).
  • Compared with the other character operation objects, a character operation object rendered by the first display parameter (such as the character operation object b12) has a distinct distinguishing feature, so that the terminal user can immediately recognize it and can quickly select such a character operation object in subsequent operations.
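The effect of the first display parameter can be pictured as a distinct set of rendering attributes per avatar. A hypothetical sketch; the concrete colors and widths are invented for illustration and are not specified by the patent:

```python
# Hypothetical display parameters; the patent does not name concrete values.
DEFAULT_PARAMS = {"ring_color": "#808080", "ring_width": 1, "brightness": 0.6}
FIRST_DISPLAY_PARAMS = {"ring_color": "#FFD700", "ring_width": 3, "brightness": 1.0}

def render_params_for_avatars(avatar_ids, ids_in_range):
    """Avatars whose associated second character object satisfies the first
    preset condition get the distinct outer-ring effect (the first display
    parameter); all other avatars keep the default look."""
    return {a: (FIRST_DISPLAY_PARAMS if a in ids_in_range else DEFAULT_PARAMS)
            for a in avatar_ids}
```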
  • When the terminal user triggers an operation on at least one character operation object rendered according to the first display parameter in the character selection bar object in the graphical user interface, that is, when the terminal detects a selection operation gesture on the at least one character operation object, the second character object associated with that character operation object is selected; the first character object then performs the first virtual operation on the second character object.
  • The first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with the second character object is selected, the first character object directly performs a physical attack operation on the second character object. When a skill release operation is to be performed, a skill object is first selected by a skill selection operation gesture; after the character operation object associated with the second character object is selected, the first character object performs, on the second character object, the skill release operation of the selected skill object.
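  • The selection flow just described can be sketched as follows. This is a minimal illustration under assumed names; the patent specifies behavior, not an implementation:

```python
# Illustrative dispatch of the first virtual operation: once the selection
# gesture picks a character operation object, the first character object
# either performs a physical attack, or, if a skill object was previously
# chosen via a skill selection gesture, releases that skill on the
# associated second character object. All names are assumptions.
def perform_first_virtual_operation(target_id, selected_skill=None):
    if selected_skill is None:
        # No skill object was selected first: fall back to a physical attack.
        return f"physical attack on character {target_id}"
    # A skill object was selected first, so release it on the target.
    return f"release skill '{selected_skill}' on character {target_id}"

print(perform_first_virtual_operation(2))
print(perform_first_virtual_operation(2, selected_skill="skill 803"))
```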
  • With the technical solution of this embodiment, the character operation objects associated with the second character objects that perform information interaction with the first character object are rendered in the corresponding window positions of the character container object deployed in the role selection area of the graphical user interface, and the character operation object associated with a second character object whose distance from the first character object meets the first preset condition is rendered according to the first display parameter. That is, the UI avatar associated with a second character object that satisfies the first preset condition is rendered according to the first display parameter, so that the UI avatar has a display effect different from the other UI avatars. Based on this differentiated display effect, the user can quickly and accurately select the target character object through a selection operation gesture on the character operation object, thereby greatly improving the user's operation experience in the interaction process.
  • FIG. 6 is a schematic flowchart diagram of an information processing method according to Embodiment 2 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 6, the method includes:
  • Step 301: Render at least one virtual resource object on the graphical user interface, the at least one virtual resource object including a first character object configured to perform a first virtual operation according to an input first user command.
  • Step 302: Deploy at least one character container object in at least one role selection area of the graphical user interface, the character container object including at least one window position.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one character container object, and the character container object includes at least one window position, at least part of the window positions carrying corresponding character operation objects. A character operation object is represented in the graphical user interface by an identifier of the character object associated with it (the identifier may be an avatar); here, the second character object associated with the character operation object belongs to a different group from the first character object. The rendering manner of the character container object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object may be characterized by a character selection bar object or a character selection disk object.
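  • One possible data-structure sketch for the container and its window positions is shown below. The class and field names are assumptions made for illustration only:

```python
# Hypothetical structure for the role selection area: a character container
# object holding window positions, each carrying the avatar identifier of
# one second character object of the opposing group. The container may be
# rendered as a bar (strip) or as a ring (disk). Names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WindowPosition:
    avatar_id: Optional[str] = None   # identifier (avatar) of a character object

@dataclass
class CharacterContainer:
    shape: str = "bar"                # "bar" (strip) or "ring" (disk) rendering
    window_positions: List[WindowPosition] = field(default_factory=list)

    def assign(self, avatar_ids):
        # Fill one window position per associated second character object.
        self.window_positions = [WindowPosition(a) for a in avatar_ids]

bar = CharacterContainer(shape="bar")
bar.assign(["b11", "b12", "b13", "b14", "b15"])
print([w.avatar_id for w in bar.window_positions])  # → ['b11', ..., 'b15']
```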
  • In this embodiment, the graphical user interface rendered on the display of the terminal includes at least one virtual resource object, the virtual resource objects including at least one first character object a10. The terminal user may perform information interaction through the graphical user interface, that is, input a user command; the first character object a10 can perform a first virtual operation based on the first user command detected by the terminal. The first virtual operation includes, but is not limited to: a moving operation, a physical attack operation, a skill attack operation, and so on. It can be understood that the first character object a10 is a character object manipulated by the user of the terminal; in the game system, the first character object a10 performs a corresponding action in the graphical user interface based on the operation of the user.
  • The graphical user interface further includes a mini map 801 of the virtual area where the user character object is located; an enlarged diagram of the mini map 801 is shown at 801a, in which the location in the virtual area of each character object, including the friends belonging to the same first group as the first character object a10 and the enemies belonging to the second group, is identified. The graphical user interface further includes at least one skill object 803, and the user can control the user character object to perform a corresponding skill release operation through a skill release operation gesture.
  • The graphical user interface has a role selection area 802, in which a character container object is deployed. In this example, the character container object is characterized by a character selection bar object (that is, the character container object presents a strip-shaped display effect). The character container object includes at least one window position, and the character operation object associated with each second character object interacting with the first character object is rendered in a corresponding window position. When the character operation objects are represented by avatars, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second character object that interacts with the first character object.
  • In this example, there are five second character objects belonging to a different group from the first character object a10, and the role selection area 802 correspondingly includes five character operation objects, such as the character operation object b11, the character operation object b12, the character operation object b13, the character operation object b14, and the character operation object b15 shown in FIG. 3. It can be understood that the five character operation objects in the role selection area 802 are in one-to-one correspondence with the five second character objects belonging to a different group from the first character object.
  • In this embodiment, the position of the first character object changes in real time during the manipulation by the user of the terminal, and correspondingly the positions of the second character objects in the graphical user interface also change in real time. Based on this, in the process in which the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the second character object on which the virtual operation is to be performed. Therefore, a second character object whose distance from the first character object in the graphical user interface meets the first preset condition is detected.
  • Step 303: Detect a second character object in the graphical user interface whose distance from the first character object meets the first preset condition, and render the character operation object associated with the detected second character object in at least one window position according to the first display parameter.
  • Here, the detecting of a second character object in the graphical user interface whose distance from the first character object meets the first preset condition includes: detecting a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
  • Specifically, a circular area centered on the first character object 1 with a radius equal to the first preset threshold (R) is detected, and the range of the area covered by the circular area is obtained; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space in which the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is acquired. Further, the coordinates of each second character object in the graphical user interface are detected in real time, and it is determined whether the detected coordinates fall within the coordinate range characterizing the circular area. When a coordinate is determined to fall within that coordinate range (for example, as shown in FIG. 4, the second character object 2, the second character object 3, and the second character object 4 are all in the circular area), a second character object whose distance is less than the first preset threshold is detected in the graphical user interface.
  • The first preset threshold satisfies an attack distance or a skill release distance of the first character object, so that the second character object can be quickly selected in a subsequent operation and the first character object can perform a virtual operation on it.
  • Step 304: Detect, among the second character objects whose distance from the first character object meets the first preset condition, at least part of the second character objects whose distance from the first character object meets a second preset condition, and render the character operation objects associated with the detected at least part of the second character objects in the at least one window position according to a second display parameter. Here, the at least part of the second character objects whose distance from the first character object meets the second preset condition include: second character objects whose distance from the first character object reaches a second preset threshold, where the second preset threshold is greater than or equal to the first preset threshold.
  • FIG. 7 is a schematic diagram of the detection principle for a second character object whose distance from the first character object meets the second preset condition in the information processing method according to this embodiment of the present invention. With reference to FIG. 4 and FIG. 7, consider the second character objects whose distance from the first character object satisfies the first preset condition (the second character object 2, the second character object 3, and the second character object 4 shown in FIG. 4), that is, the second character objects whose previous coordinate values are in the circular area with a radius equal to the first preset threshold (R). Because the positions of the second character objects in the graphical user interface change in real time, before step 305, that is, before a selection operation gesture for at least one character operation object rendered according to the first display parameter is detected, the coordinate values of the second character objects whose previous coordinate values were in the circular area of radius (R) are detected in real time, and it is determined whether those coordinate values remain within the circular area centered on the first character object with a radius equal to the second preset threshold (r shown in FIG. 7). In the illustration shown in FIG. 7, the second preset threshold (r) is greater than the first preset threshold (R). When such a second character object moves so that its distance from the first character object reaches the second preset threshold (r), such as the second character object 4 shown in FIG. 7, the selectable operation state of the character operation object associated with that second character object is released, and the character operation object is rendered in the corresponding window position according to the second display parameter.
  • The second display parameter may be a normal display parameter; that is, in the graphical user interface, except for the character operation objects displayed according to the first display parameter, the remaining virtual resource objects are rendered according to the second display parameter.
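  • The interplay between the two thresholds described above can be sketched as follows. This is a hedged reading of FIG. 7 under assumed names: an object entering the radius-R circle is highlighted, and, because r is larger than R, the highlighted selectable state is only released once the distance reaches r:

```python
# Illustrative sketch of the two-threshold rendering rule. A second
# character object inside the first preset threshold R is rendered with
# the first display parameter (highlighted, selectable). A previously
# highlighted object keeps that state until its distance reaches the
# second preset threshold r (r >= R), at which point the selectable state
# is released and normal rendering resumes. All names are assumptions.
def update_render_state(was_highlighted, distance, threshold_R, threshold_r):
    assert threshold_r >= threshold_R
    if distance <= threshold_R:
        return "first display parameter"   # highlighted and selectable
    if was_highlighted and distance < threshold_r:
        return "first display parameter"   # hysteresis: not yet released
    return "second display parameter"      # selectable state released

print(update_render_state(False, 3.0, 4.0, 6.0))  # inside R
print(update_render_state(True, 5.0, 4.0, 6.0))   # between R and r
print(update_render_state(True, 6.5, 4.0, 6.0))   # reached r, released
```

  • Using two thresholds rather than one avoids the highlight flickering when a target hovers near the boundary of the attack range.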
  • Step 305: When a selection operation gesture for at least one character operation object rendered according to the first display parameter is detected, the first character object performs at least one first virtual operation on the corresponding second character object.
  • In this embodiment, the character operation object is rendered in the corresponding window position according to the first display parameter. As shown in FIG. 5, the character operation object associated with a second character object satisfying the first preset condition is rendered according to the first display parameter (see the character operation object b12 shown in FIG. 5, where the outer ring edge of the character operation object b12 has a rendering effect different from that of the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered according to the first display parameter (such as the character operation object b12) has a distinct distinguishing feature, so that the terminal user can immediately recognize it, which facilitates the terminal user quickly selecting such a character operation object in subsequent operations.
  • For at least one character operation object rendered according to the first display parameter in the character selection bar object in the graphical user interface, when the terminal user triggers a selection operation on that character operation object, that is, when the terminal detects a selection operation gesture for the character operation object, it indicates that the second character object associated with the character operation object is selected; the first character object then performs the first virtual operation on the second character object.
  • The first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with the second character object is selected, the first character object directly performs a physical attack operation on the second character object. When a skill release operation is to be performed, a skill object is first selected by a skill selection operation gesture; after the character operation object associated with the second character object is selected, the first character object performs, on the second character object, the skill release operation of the selected skill object.
  • With the technical solution of this embodiment, the character operation objects associated with the second character objects that perform information interaction with the first character object are rendered in the corresponding window positions of the character container object deployed in the role selection area of the graphical user interface, and the character operation object associated with a second character object whose distance from the first character object meets the first preset condition is rendered according to the first display parameter. That is, the UI avatar associated with a second character object that satisfies the first preset condition is rendered according to the first display parameter, so that the UI avatar has a display effect different from the other UI avatars. Based on this differentiated display effect, the user can quickly and accurately select the target character object through a selection operation gesture on the character operation object, thereby greatly improving the user's operation experience in the interaction process.
  • FIG. 8 is a schematic flowchart diagram of an information processing method according to Embodiment 3 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 8, the method includes:
  • Step 401: Render at least one virtual resource object on the graphical user interface, the at least one virtual resource object including a first character object configured to perform a first virtual operation according to an input first user command.
  • Step 402: Deploy at least one character container object in at least one role selection area of the graphical user interface, the character container object including at least one window position.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one character container object, and the character container object includes at least one window position, at least part of the window positions carrying corresponding character operation objects. A character operation object is represented in the graphical user interface by an identifier of the character object associated with it (the identifier may be an avatar); here, the second character object associated with the character operation object belongs to a different group from the first character object. The rendering manner of the character container object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object may be characterized by a character selection bar object or a character selection disk object.
  • In this embodiment, the graphical user interface rendered on the display of the terminal includes at least one virtual resource object, the virtual resource objects including at least one first character object a10. The terminal user may perform information interaction through the graphical user interface, that is, input a user command; the first character object a10 can perform a first virtual operation based on the first user command detected by the terminal. The first virtual operation includes, but is not limited to: a moving operation, a physical attack operation, a skill attack operation, and so on. It can be understood that the first character object a10 is a character object manipulated by the user of the terminal; in the game system, the first character object a10 performs a corresponding action in the graphical user interface based on the operation of the user.
  • The graphical user interface further includes a mini map 801 of the virtual area where the user character object is located; an enlarged diagram of the mini map 801 is shown at 801a, in which the location in the virtual area of each character object, including the friends belonging to the same first group as the first character object a10 and the enemies belonging to the second group, is identified. The graphical user interface further includes at least one skill object 803, and the user can control the user character object to perform a corresponding skill release operation through a skill release operation gesture.
  • The graphical user interface has a role selection area 802, in which a character container object is deployed. In this example, the character container object is characterized by a character selection bar object; that is, the character container object presents a strip-shaped display effect. The character container object includes at least one window position, and the character operation object associated with each second character object interacting with the first character object is rendered in a corresponding window position. When the character operation objects are represented by avatars, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second character object that interacts with the first character object.
  • In this example, there are five second character objects belonging to a different group from the first character object a10, and the role selection area 802 correspondingly includes five character operation objects, such as the character operation object b11, the character operation object b12, the character operation object b13, the character operation object b14, and the character operation object b15 shown in FIG. 3. It can be understood that the five character operation objects in the role selection area 802 are in one-to-one correspondence with the five second character objects belonging to a different group from the first character object.
  • In this embodiment, the position of the first character object changes in real time during the manipulation by the user of the terminal, and correspondingly the positions of the second character objects in the graphical user interface also change in real time. Based on this, in the process in which the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the second character object on which the virtual operation is to be performed. Therefore, a second character object whose distance from the first character object in the graphical user interface meets the first preset condition is detected.
  • Step 403: Detect a second character object in the graphical user interface whose distance from the first character object meets the first preset condition, and render the character operation object associated with the detected second character object in at least one window position according to the first display parameter.
  • Here, the detecting of a second character object in the graphical user interface whose distance from the first character object meets the first preset condition includes: detecting a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
  • Specifically, a circular area centered on the first character object 1 with a radius equal to the first preset threshold (R) is detected, and the range of the area covered by the circular area is obtained; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space in which the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is acquired. Further, the coordinates of each second character object in the graphical user interface are detected in real time, and it is determined whether the detected coordinates fall within the coordinate range characterizing the circular area. When a coordinate is determined to fall within that coordinate range (for example, as shown in FIG. 4, the second character object 2, the second character object 3, and the second character object 4 are all in the circular area), a second character object whose distance is less than the first preset threshold is detected in the graphical user interface.
  • The first preset threshold satisfies an attack distance or a skill release distance of the first character object, so that the second character object can be quickly selected in a subsequent operation and the first character object can perform a virtual operation on it.
  • Step 404: When a selection operation gesture for at least one character operation object rendered according to the first display parameter is detected, the first character object performs at least one first virtual operation on the corresponding second character object.
  • In this embodiment, the character operation object is rendered in the corresponding window position according to the first display parameter. As shown in FIG. 5, the character operation object associated with a second character object satisfying the first preset condition is rendered according to the first display parameter (see the character operation object b12 shown in FIG. 5, where the outer ring edge of the character operation object b12 has a rendering effect different from that of the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered according to the first display parameter (such as the character operation object b12) has a distinct distinguishing feature, so that the terminal user can immediately recognize it, which facilitates the terminal user quickly selecting such a character operation object in subsequent operations.
  • For at least one character operation object rendered according to the first display parameter in the character selection bar object in the graphical user interface, when the terminal user triggers a selection operation on that character operation object, that is, when the terminal detects a selection operation gesture for the character operation object, it indicates that the second character object associated with the character operation object is selected; the first character object then performs the first virtual operation on the second character object.
  • The first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with the second character object is selected, the first character object directly performs a physical attack operation on the second character object. When a skill release operation is to be performed, a skill object is first selected by a skill selection operation gesture; after the character operation object associated with the second character object is selected, the first character object performs, on the second character object, the skill release operation of the selected skill object.
  • Step 405: Acquire state attribute information of the second character objects in the graphical user interface and synchronize the state attribute information to a server; and obtain, from the server, state attribute information of the character objects associated with the character operation objects in the character container object.
  • In this embodiment, the terminal acquires state attribute information of the second character objects in the graphical user interface. Based on the setting of the software application, the virtual space in which the first character object and the second character objects are located may be relatively large, so that the terminal's own field of view covers only part of it. Specifically, the terminal obtains state attribute information of the second character objects included in its own graphical user interface, and synchronizes the state attribute information of those second character objects to the server. The state attribute information includes, but is not limited to, a blood volume value, a life value, or skill attribute information of the second character object. Further, the terminal obtains, from the server according to a preset rule, state attribute information of the character objects associated with the character operation objects in the character container object, so that when a second character object is not in the terminal's own graphical user interface, its state attribute information in the server, synchronized there by other terminals, can be obtained; the terminal thereby obtains the state attribute information of the second character objects associated with all the character operation objects included in the character container object.
  • Here, the terminal and the other terminals belong to the same group; it can be understood that the first character object controlled by the terminal and the first character objects controlled by the other terminals belong to the same group in the game system, and the group collectively performs virtual operations on the second character objects belonging to another group. Therefore, the graphical user interfaces of the other terminals may include at least part of the second character objects; based on the state attribute information of the second character objects included in the graphical user interface of at least one terminal belonging to the same group, synchronization of the state attribute information of the second character objects is realized.
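  • The synchronization described above can be sketched as follows. This is a minimal illustration with assumed message shapes and names; the patent specifies the synchronization behavior, not an API:

```python
# Hypothetical sketch of step 405's synchronization: each terminal of a
# group uploads the state attribute information (e.g. blood volume values)
# of the second character objects visible in its own field of view, and
# the server merges the reports so that every terminal can obtain state
# for all character operation objects in its character container object.
class SyncServer:
    def __init__(self):
        self.state = {}   # character object id -> state attribute information

    def upload(self, reports):
        # A terminal reports what it sees in its own graphical user interface.
        self.state.update(reports)

    def fetch(self, role_ids):
        # A terminal pulls state for the character operation objects it
        # renders; objects no teammate can see yield None.
        return {rid: self.state.get(rid) for rid in role_ids}

server = SyncServer()
server.upload({6: {"blood": 80}})                     # this terminal sees member 6
server.upload({7: {"blood": 55}, 8: {"blood": 100}})  # a teammate's terminal
print(server.fetch([6, 7, 8, 10]))                    # member 10 unseen -> None
```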
  • Step 406: Render, in the corresponding window position and in a preset manner according to the obtained state attribute information, the character operation object associated with the second character object.
  • FIG. 9 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention. As shown in FIG. 9, taking a blood volume value as an example of the state attribute information, the blood volume value of a second character object is displayed in the outer ring area of the character operation object associated with it (see the character operation object b22 in FIG. 9). It can be understood that the second character objects included in the graphical user interfaces of the terminal and the other terminals may not include all the second character objects belonging to the other group that interact with them.
  • For example, the group members belonging to the first group include: group member 1, group member 2, group member 3, group member 4, and group member 5; the group members belonging to the second group include: group member 6, group member 7, group member 8, group member 9, and group member 10. When the terminal manipulates group member 1, only group member 6 is included in the field of view image of the graphical user interface of the terminal; the field of view images of the graphical user interfaces of the other terminals controlling group members of the first group include group member 7, group member 8, and group member 9; and group member 10 does not appear in the field of view image of the graphical user interface of any terminal controlled by a group member of the first group.
  • In this case, as shown in FIG. 9, the character operation object b21 presents a display effect different from that of the other character operation objects, specifically a gray display effect, indicating that the second character object corresponding to the character operation object b21 is neither in the field of view image of the first character object a10 nor in the field of view image of any other character object belonging to the same group as the first character object a10; correspondingly, the outer ring area of the character operation object b21 does not display the state attribute information of the second character object associated with it.
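  • The per-window rendering decision just described can be sketched as follows. All names are illustrative assumptions, following the FIG. 9 example:

```python
# Hypothetical rendering decision for each character operation object (UI
# avatar) in the role selection area: if the associated second character
# object appears in no teammate's field of view, the avatar is grayed out
# and its outer ring shows no state; otherwise the outer ring displays the
# synchronized blood volume value. Names are assumptions, not the patent's.
def render_window_position(visible_to_group, blood_volume=None):
    if not visible_to_group:
        return {"style": "gray", "outer_ring": None}
    return {"style": "normal", "outer_ring": f"blood {blood_volume}"}

print(render_window_position(False))                   # like object b21
print(render_window_position(True, blood_volume=80))   # like object b22
```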
  • With the technical solution of this embodiment, the character operation objects associated with the second character objects that perform information interaction with the first character object are rendered in the corresponding window positions of the character container object deployed in the role selection area of the graphical user interface, and the character operation object associated with a second character object whose distance from the first character object meets the first preset condition is rendered according to the first display parameter. That is, the UI avatar associated with a second character object that satisfies the first preset condition is rendered according to the first display parameter, so that the UI avatar has a display effect different from the other UI avatars. Based on this differentiated display effect, the user can quickly and accurately select the target character object through a selection operation gesture on the character operation object, thereby greatly improving the user's operation experience in the interaction process.
  • In addition, by synchronizing the state attribute information of the second character objects in the field of view images of the character objects belonging to the same group (that is, teammates), the state attribute information of the second character objects associated with the character operation objects in the character container object is obtained, and that state attribute information is rendered in the corresponding window positions in a specific manner; that is, the state attribute information of the second character objects (the enemies) is reflected on the corresponding character operation objects (UI avatars). This allows the user to quickly learn the state attribute information of the second character objects (the enemies), improving the user's operation experience in the information interaction process.
• The following takes a one-to-one application scenario as an example for detailed description. The one-to-one application scenario is one in which a first character object controlled by terminal 1 and a second character object controlled by terminal 2 perform information interaction; the remaining application scenarios may refer to this description and are not repeated in this embodiment.
• FIG. 10 is a schematic diagram of an interaction in an information processing method according to an embodiment of the present invention. As shown in FIG. 10, the application scenario includes a terminal 1, a terminal 2, and a server 3; the terminal 1 is trigger-controlled by user 1, and the terminal 2 is trigger-controlled by user 2. The method includes:
• Step 11: User 1 triggers the game system on terminal 1 and logs in with authentication information, which may be a username and password.
• Step 12: The terminal 1 transmits the obtained authentication information to the server 3 for identity verification; after the verification is passed, the server returns a first graphical user interface to the terminal 1.
• The first graphical user interface includes a first character object capable of performing a virtual operation based on a triggering operation of user 1; the virtual operation includes a movement operation of the first character object, an attack operation or skill release operation of the first character object against other character objects, and so on.
• Step 21: User 2 triggers the game system on terminal 2 and logs in with authentication information, which may be a username and password.
• Step 22: The terminal 2 transmits the obtained authentication information to the server 3 for identity verification; after the verification is passed, the server returns a second graphical user interface to the terminal 2.
• The second graphical user interface includes a second character object capable of performing a virtual operation based on a triggering operation of user 2; the virtual operation includes a movement operation of the second character object, an attack operation or skill release operation of the second character object against other character objects, and so on.
• When user 1 and user 2, based on triggering operations, make the first character object and the second character object the objects of each other's information interaction (that is, the first character object takes the second character object as its target interaction object, and the second character object takes the first character object as its target interaction object), user 1 and user 2 play as adversaries in the game.
• A window position of the character container object in the character selection area of the first graphical user interface renders the character operation object associated with the second character object; correspondingly, a window position of the character container object in the character selection area of the second graphical user interface renders the character operation object associated with the first character object.
• The terminal 1 detects the distance between the second character object and the first character object in real time; when the distance is within the first preset threshold range, the character operation object associated with the second character object is rendered in its window position according to the first display parameter, that is, the character operation object is highlighted.
• Likewise, the terminal 2 detects the distance between the first character object and the second character object in real time; when the distance is within the first preset threshold range, the character operation object associated with the first character object is rendered in its window position according to the first display parameter, that is, the character operation object is highlighted.
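The per-frame check described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the function names and the threshold value are assumptions, and the two display parameters are represented as strings.

```python
import math

# Hypothetical sketch: each terminal measures the distance between its own
# character object and the opposing character object, and highlights the
# opponent's operation object (UI avatar) when the distance falls within
# the first preset threshold.

FIRST_PRESET_THRESHOLD = 6.0  # e.g. the skill release distance (assumed)

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_highlight(first_pos, second_pos):
    """Return the display parameter for the opponent's operation object."""
    if distance(first_pos, second_pos) <= FIRST_PRESET_THRESHOLD:
        return "first_display_parameter"   # highlighted rendering
    return "second_display_parameter"      # normal rendering
```

Run every frame, this keeps the highlight of each avatar consistent with the real-time positions of the two character objects.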
• Step 13: User 1 performs a triggering operation on the first graphical user interface presented by the terminal 1. The triggering operation may be directed at any virtual resource object in the first graphical user interface, including a skill release operation for any skill object, an information interaction operation for any character object (which can be understood as a physical attack operation), a movement operation of the first character object, and the like.
• In this embodiment, the triggering operation is a selection gesture operation on a character operation object rendered according to the first display parameter in the character container object of the first graphical user interface.
• Step 14: When the terminal 1 acquires the triggering operation, it identifies the instruction corresponding to the triggering operation gesture and executes it, for example a skill release instruction for a corresponding operation object, an information interaction instruction for a corresponding character object (e.g., a physical attack instruction), or a movement instruction. In the process of executing the instruction, the change of the corresponding data is recorded.
• When the terminal 1 acquires a selection gesture operation on a character operation object rendered according to the first display parameter, a corresponding first instruction is generated and executed to control the first character object to perform a virtual operation (such as a physical attack operation or a skill release operation) on the corresponding second character object.
• Step 15: The changed data is synchronized to the server 3 as first data corresponding to the terminal 1.
• Step 23: User 2 performs a triggering operation on the second graphical user interface presented by the terminal 2. The triggering operation may be directed at any virtual resource object in the second graphical user interface, including a skill release operation for any skill object, an information interaction operation for any character object (which can be understood as a physical attack operation), a movement operation of the second character object, and the like.
• In this embodiment, the triggering operation is a selection gesture operation on a character operation object rendered according to the first display parameter in the character container object of the second graphical user interface.
• Step 24: When the terminal 2 acquires the triggering operation, it identifies the instruction corresponding to the triggering operation gesture and executes it, for example a skill release instruction for a corresponding operation object, an information interaction instruction for a corresponding character object (e.g., a physical attack instruction), or a movement instruction. In the process of executing the instruction, the change of the corresponding data is recorded.
• When the terminal 2 acquires a selection gesture operation on a character operation object rendered according to the first display parameter, a second instruction is generated and executed to control the second character object to perform a virtual operation (such as a physical attack operation or a skill release operation) on the corresponding first character object.
• Step 25: The changed data is synchronized to the server 3 as second data corresponding to the terminal 2.
• Step 30: The server 3 updates its data based on the first data synchronized by the terminal 1 and the second data synchronized by the terminal 2, and synchronizes the updated data to the terminal 1 and the terminal 2, respectively.
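The merge-and-push behaviour of step 30 can be sketched as below. The dict-based state, the overwrite semantics, and all names are illustrative assumptions, not the patent's concrete protocol.

```python
# Hypothetical sketch of step 30: the server merges the data synchronized by
# terminal 1 (first data) and terminal 2 (second data) into its authoritative
# state, then pushes the updated state back to every connected terminal.

def server_update(state, first_data, second_data):
    """Merge both terminals' recorded changes; later entries overwrite earlier ones."""
    updated = dict(state)
    updated.update(first_data)
    updated.update(second_data)
    return updated

def synchronize(state, terminals):
    """Push a copy of the updated state to each terminal's local state."""
    for terminal in terminals:
        terminal["local_state"] = dict(state)
```

For example, if terminal 2's physical attack changed the first character object's blood volume, that change reaches terminal 1 on the next synchronization.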
• The application scenario relates to Multiplayer Online Battle Arena (MOBA) games.
• The technical terms involved in MOBA are: 1) UI layer: the icons in the graphical user interface; 2) skill indicator: the special effects, apertures, and operations that assist skill release; 3) virtual lens: can be understood as the in-game camera; 4) mini-map: a reduced version of the large map, which can be understood as a radar map on which enemy information and locations are displayed.
• FIG. 11 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention; it is based on an actual interaction scenario.
• In the graphical user interface 90 of this embodiment, a first character object 93 and at least one skill object 92 are rendered; the first character object 93 can perform a corresponding virtual operation based on a triggering operation of the user.
• The graphical user interface 90 further includes a character selection area 91 containing a character container object.
• In this illustration, the character container object includes five window positions, each rendering one character operation object: the character operation object 911, the character operation object 912, the character operation object 913, the character operation object 914, and the character operation object 915.
• Each character operation object is associated with a character object, and the five associated character objects all belong to a different group from the first character object 93; that is, the five character objects are enemies of the first character object 93 and interact with it.
• When a second character object in the graphical user interface 90 whose distance from the first character object 93 meets a first preset threshold is detected, the character operation object associated with that second character object is rendered according to the first display parameter.
• The first preset threshold may be set as the skill release distance of a skill object according to actual needs, and is not limited to this setting manner.
• The character operation object 913 shown in FIG. 11 has a highlighted display effect compared with the other character operation objects. Based on this differentiated display effect, the user can quickly and accurately select the target character object through a selection operation gesture on the character operation object, and then perform a virtual operation on the target character object, greatly improving the user's operation experience during interaction.
• An embodiment of the present invention further provides a terminal.
• FIG. 12 is a schematic structural diagram of a terminal according to Embodiment 4 of the present invention; as shown in FIG. 12, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detecting unit 63, and an operation executing unit 64, wherein:
• the rendering processing unit 61 is configured to execute a software application and render a graphical user interface, and to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; it is further configured to render, according to the first display parameter, the character operation object associated with at least one second character object detected by the detecting unit 63 in the corresponding window position;
• the deployment unit 62 is configured to deploy at least one character container object in at least one character selection area of the graphical user interface, the character container object including at least one window position;
• the detecting unit 63 is configured to detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition;
• the operation executing unit 64 is configured to, when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, control the first character object to perform at least one of the first virtual operations on the corresponding second character object.
• Here, the graphical user interface includes at least one character selection area, the character selection area includes at least one character container object, and the character container object includes at least one window position, at least part of which carries a corresponding character operation object. The character operation object is represented in the graphical user interface by an identifier of the character object associated with it (the identifier may be an avatar); the second character object associated with the character operation object belongs to a different group from the first character object.
• The rendering manner of the character container object in the character selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object may be characterized by a character selection bar object or a character selection disk object.
• The graphical user interface rendered by the rendering processing unit 61 includes at least one virtual resource object, the virtual resource object including at least one first character object a10. A user of the terminal can perform information interaction through the graphical user interface, that is, input a user command. The first character object a10 can perform a first virtual operation based on the first user command detected by the terminal; the first virtual operation includes, but is not limited to, a movement operation, a physical attack operation, a skill attack operation, and so on.
• The first character object a10 is a character object manipulated by the user of the terminal; in the game system, the first character object a10 can perform corresponding actions in the graphical user interface based on the user's operations.
• The graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located (a detailed view of the mini-map 801 is shown as 801a). The location of each character object, including the friends belonging to the same first group as the first character object a10 and the enemies belonging to the second group, is identified in the mini-map 801.
• The graphical user interface further includes at least one skill object 803; the user can control the user's character object to perform a corresponding skill release operation through a skill release gesture.
• The deployment unit 62 deploys a character selection area 802 in the graphical user interface, in which a character container object is deployed. In this embodiment, the character container object is characterized by a character selection bar object (that is, the character container object presents a strip-shaped display effect).
• The character container object includes at least one window position, and the character operation object associated with a second character object interacting with the first character object is rendered in a corresponding window position. When the character operation object is represented by an avatar, the character selection area 802 includes at least one avatar, each avatar corresponding to a second character object that interacts with the first character object.
• As shown in FIG. 3, there are five second character objects belonging to a different group from the first character object a10, and correspondingly the character selection area 802 includes five character operation objects: the character operation object b11, the character operation object b12, the character operation object b13, the character operation object b14, and the character operation object b15 shown in FIG. 3. It can be understood that the five character operation objects in the character selection area 802 correspond one-to-one with the second character objects belonging to different groups from the first character object.
• The position of the first character object changes in real time under the manipulation of the terminal's user, and correspondingly the positions of the second character objects in the graphical user interface also change in real time. Because of this, in the process of the first character object performing a virtual operation on a second character object, the user of the terminal cannot easily select the character object on which the virtual operation is to be performed.
• The detecting unit 63 detects a second character object in the graphical user interface whose distance from the first character object meets the first preset condition.
• In this embodiment, the detecting unit 63 is configured to detect a second character object in the graphical user interface whose distance from the first character object is smaller than a first preset threshold.
• Specifically, the detecting unit 63 obtains a circular area centered on the first character object with the first preset threshold (R) as its radius; the range of this area may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is acquired. Further, the detecting unit 63 detects in real time the coordinates of the second character objects in the graphical user interface and determines whether each detected coordinate falls within the coordinate range characterizing the circular area. When a coordinate falls within that range (as shown in FIG. 4, the second character object 2, the second character object 3, and the second character object 4 are all in the circular area), a second character object whose distance from the first character object is less than the first preset threshold is detected in the graphical user interface.
• The first preset threshold may satisfy the attack distance or skill release distance of the first character object, so that the second character object can be quickly selected and subjected to a virtual operation by the first character object in a subsequent operation.
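The coordinate-range check described above can be sketched as follows. This is a minimal illustration under the assumption of a 2-D XY coordinate system; the function names are hypothetical.

```python
# Sketch of the detection: a circular area of radius R (the first preset
# threshold) is centred on the first character object, and a second character
# object is detected when its real-time coordinates fall inside that area.

def in_circular_area(center, point, radius):
    """True if `point` lies within the circle of `radius` around `center`."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    # Comparing squared distances avoids a square root per object per frame.
    return dx * dx + dy * dy <= radius * radius

def detect_second_objects(first_pos, second_positions, first_threshold):
    """Return indices of the second character objects inside the circular area."""
    return [i for i, pos in enumerate(second_positions)
            if in_circular_area(first_pos, pos, first_threshold)]
```

The returned indices are the candidates whose associated character operation objects would be rendered according to the first display parameter.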
• The detecting unit 63 is further configured to detect, among the second character objects whose distance from the first character object meets the first preset condition, at least part of the second character objects whose distance from the first character object meets a second preset condition.
• Accordingly, the rendering processing unit 61 is configured to render, according to a second display parameter, the character operation objects associated with the at least part of the second character objects detected by the detecting unit 63 in at least one of the window positions, wherein rendering according to the second display parameter has a display effect different from rendering according to the first display parameter.
• The detecting unit 63 is configured to detect, among the second character objects whose distance from the first character object meets the first preset condition, a second character object whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
• Because the positions in the graphical user interface change in real time, before a selection operation gesture on a character operation object rendered according to the first display parameter is detected, the detecting unit 63 detects in real time the coordinate values of the second character objects whose previous coordinate values were in the circular area with the first preset threshold (R) as its radius, and determines whether those coordinate values reach the second preset threshold (r) shown in FIG. 6.
• In this embodiment, the second preset threshold (r) is greater than the first preset threshold (R). That is, with the real-time movement of the first character object and the second character objects, at least part of the second character objects whose previous coordinate values were in the circular area of radius R may move until their distance from the first character object is greater than the first preset threshold (R) and reaches the second preset threshold (r), such as the second character object 4 shown in FIG. 6. In that case, the corresponding character operation object leaves the selectable operation state, and the character operation object is rendered in the corresponding window position according to the second display parameter.
• The second display parameter may be a normal display parameter; that is, in the graphical user interface, all virtual resource objects except the character operation objects displayed according to the first display parameter are rendered according to the second display parameter.
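The two-threshold behaviour described above amounts to a hysteresis: an object becomes highlighted inside radius R and only reverts to normal rendering once its distance reaches the larger radius r. The sketch below is illustrative; the names and threshold values are assumptions.

```python
import math

# Hypothetical sketch: enter the highlighted (selectable) state within the
# first preset threshold R; leave it only when the distance reaches the
# second preset threshold r (r >= R), keeping the highlight stable while
# both character objects move in real time.

R = 5.0   # first preset threshold (enter highlight), assumed value
r = 8.0   # second preset threshold (leave highlight), assumed value

def next_state(was_highlighted, first_pos, second_pos):
    """True -> render with the first display parameter; False -> the second."""
    d = math.hypot(first_pos[0] - second_pos[0], first_pos[1] - second_pos[1])
    if d <= R:
        return True                  # inside the circular area of radius R
    if was_highlighted and d < r:
        return True                  # still inside the hysteresis band
    return False                     # distance reached r: normal rendering
```

The gap between R and r prevents the avatar from flickering between the two display parameters when the distance hovers near the boundary.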
• The rendering processing unit 61 renders the character operation object associated with a second character object that satisfies the first preset condition in the corresponding window position according to the first display parameter (see the character operation object b12 shown in FIG. 5: the outer ring edge of b12 has a rendering effect different from the other character operation objects, giving it a highlighted display effect). The character operation object rendered according to the first display parameter (the character operation object b12) has an obvious distinguishing feature compared with the other character operation objects, so that the terminal user can immediately recognize it and quickly select such a character operation object in subsequent operations.
• The functions of the processing units in the terminal of this embodiment of the present invention can be understood with reference to the related description of the information processing method. The processing units in the information processing terminal according to the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments, or by running, on a smart terminal, software that performs the functions described in the embodiments of the present invention.
• An embodiment of the present invention further provides a terminal.
• FIG. 13 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention; as shown in FIG. 13, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detecting unit 63, an operation executing unit 64, an obtaining unit 65, and a communication unit 66, wherein:
• the rendering processing unit 61 is configured to execute a software application and render a graphical user interface, and to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; it is further configured to render, according to the first display parameter, the character operation object associated with at least one second character object detected by the detecting unit 63 in the corresponding window position;
• the deployment unit 62 is configured to deploy at least one character container object in at least one character selection area of the graphical user interface, the character container object including at least one window position;
• the detecting unit 63 is configured to detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition;
• the operation executing unit 64 is configured to, when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, control the first character object to perform at least one of the first virtual operations on the corresponding second character object;
• the obtaining unit 65 is configured to acquire state attribute information of the second character objects in the graphical user interface;
• the communication unit 66 is configured to synchronize the state attribute information acquired by the obtaining unit 65 to the server, and to obtain from the server the state attribute information of the character objects associated with the character operation objects in the character container object;
• the rendering processing unit 61 is further configured to render, in a preset manner according to the obtained state attribute information, the character operation object associated with the second character object in the corresponding window position.
• The obtaining unit 65 acquires the state attribute information of the second character objects in the graphical user interface. Based on the settings of the software application, the virtual space in which the first character object and the second character objects are located is large, so the view image displayed by the graphical user interface rendered by the terminal may or may not include a given second character object.
• The terminal obtains the state attribute information of the second character objects included in its graphical user interface and synchronizes the state attribute information of those second character objects to the server.
• The state attribute information includes, but is not limited to, a blood volume value, a life value, or skill attribute information of the second character object.
• The communication unit 66 obtains from the server, according to a preset rule, the state attribute information of the character objects associated with the character operation objects in the character container object. Because second character objects and their associated state attribute information may be synchronized to the server by other terminals as well as by the terminal's own graphical user interface, the state attribute information of the second character objects associated with all the character operation objects included in the character container object can be obtained.
• Here, the terminal and the other terminals belong to the same group; it can be understood that the first character object controlled by the terminal and the first character objects controlled by the other terminals belong to the same group in the game system, and the group collectively performs virtual operations on the second character objects belonging to another group.
• Accordingly, the graphical user interfaces of the other terminals may include at least part of the second character objects; by acquiring the state attribute information of the second character objects included in the graphical user interface of at least one terminal belonging to the same group, synchronization of the state attribute information of the second character objects is realized.
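The teammate-based synchronization described above can be sketched as below. The report structures, identifiers, and function name are illustrative assumptions, not the patent's protocol.

```python
# Hypothetical sketch: each terminal in the same group reports the state
# attribute information (e.g. blood volume) of the second character objects
# visible in its own view image; the server merges these partial reports so
# every terminal can obtain the state of all enemies shown in its character
# container object.

def merge_reports(reports):
    """Merge per-terminal reports {enemy_id: state} into one server-side view."""
    merged = {}
    for report in reports:
        merged.update(report)
    return merged

# Terminal 1 sees enemies b21 and b22; a teammate's terminal sees b23.
terminal_1_report = {"b21": {"hp": 70}, "b22": {"hp": 40}}
terminal_2_report = {"b23": {"hp": 95}}

server_view = merge_reports([terminal_1_report, terminal_2_report])
```

This is why a terminal can display the blood volume of an enemy that is outside its own view image, as long as some teammate's view includes that enemy.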
• The character operation object associated with the second character object is then rendered, according to the obtained state attribute information, in the corresponding window position of the character container object.
• Taking a blood volume value as an example of the state attribute information, the outer ring area of the character operation object associated with the second character object (shown as the character operation object b22 in FIG. 9) is used as a blood bar display area b221, and the proportion of the filled portion of the blood bar display area b221 represents the current blood volume value of the corresponding second character object.
• Of course, the manner in which the state attribute information is rendered on the character operation object associated with the second character object in the corresponding window position in this embodiment of the present invention is not limited to that shown in FIG. 9.
• The functions of the processing units in the terminal of this embodiment of the present invention can likewise be understood with reference to the related description of the information processing method; these processing units may be implemented by analog circuits that realize the described functions, or by running, on a smart terminal, software that performs the functions described in the embodiments of the present invention.
• In practical applications, the rendering processing unit 61, the deployment unit 62, the detecting unit 63, the operation executing unit 64, and the obtaining unit 65 in the terminal may be implemented by a processor in the terminal, such as a CPU, a DSP, or an FPGA; the communication unit 66 in the terminal may be implemented by a transceiver antenna or a communication interface in the terminal.
• An embodiment of the present invention further provides a terminal. The terminal may be an electronic device such as a PC, or a portable electronic device such as a tablet computer, a laptop computer, or a smart phone. The game system is implemented on the terminal by installing a software application (such as a game application); the terminal includes at least a memory for storing data and a processor for data processing.
• The processor for data processing may be implemented by a microprocessor, a CPU, a DSP, or an FPGA; the memory contains operation instructions, which may be computer-executable code, and the steps in the flow of the information processing method of the embodiments of the present invention are implemented through the operation instructions.
• The terminal includes: a processor 71 and a display 72. The processor 71 is configured to execute a software application and perform rendering on the display 72 to obtain a graphical user interface; the graphical user interface and the software application cooperate on the processor 71 to implement the game system.
• The processor 71 is further configured to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
• to deploy at least one character container object in at least one character selection area of the graphical user interface, the character container object including at least one window position;
• to render, in at least one of the window positions according to the first display parameter, the character operation object associated with a detected second character object;
• and, when a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, to control the first character object to perform at least one of the first virtual operations on the corresponding second character object.
• The processor 71 is further configured to, before a selection operation gesture on at least one character operation object rendered according to the first display parameter is detected, detect, among the second character objects whose distance from the first character object meets the first preset condition, at least part of the second character objects whose distance from the first character object meets a second preset condition, and to render, according to a second display parameter, the character operation objects associated with the detected at least part of the second character objects in at least one of the window positions, wherein rendering according to the second display parameter has a display effect different from rendering according to the first display parameter.
• The detection by the processor 71, among the second character objects whose distance from the first character object meets the first preset condition, of at least part of the second character objects whose distance from the first character object meets the second preset condition includes: detecting a second character object whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
  • the terminal further includes a communication device 74;
  • the processor 71 is further configured to acquire state attribute information of the second character objects in the graphical user interface, synchronize the state attribute information to a server by using the communication device 74, and obtain from the server the state attribute information of the character objects associated with the character operation objects in the character container object;
  • the processor 71 is further configured to render, according to the obtained state attribute information, the character operation object associated with the second character object in the corresponding window slot in a preset manner.
  • the terminal in this embodiment includes: a processor 71, a display 72, a memory 73, an input device 76, a bus 75, and a communication device 74; the processor 71, the memory 73, the input device 76, the display 72, and the communication device 74 are all connected through the bus 75, which is used to transfer data among the processor 71, the memory 73, the display 72, and the communication device 74;
  • the input device 76 is mainly configured to obtain a user's input operation and may differ across terminals: when the terminal is a PC, the input device 76 may be a mouse, a keyboard, or the like; when the terminal is a portable device such as a smartphone or a tablet computer, the input device 76 may be a touch screen.
  • a computer storage medium is stored in the memory 73; the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to perform the information processing method according to the embodiments of the present invention.
  • the disclosed device and method may be implemented in other manners; the device embodiments described above are merely illustrative;
  • the division of the units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed;
  • the coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms;
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments;
  • the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
  • all or some of the steps for implementing the foregoing method embodiments may be completed by program-instruction-related hardware; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the foregoing method embodiments;
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the integrated unit of the present invention may, if implemented in the form of a software functional module and sold or used as an independent product, be stored in a computer-readable storage medium;
  • the technical solutions of the embodiments of the present invention may, in essence or in the part contributing to the prior art, be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the methods described in the embodiments of the present invention;
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the embodiments of the present invention render, in window slots of the character container object in the character selection area of the graphical user interface, the character operation objects associated with the second character objects that carry out information interaction with the first character object;
  • the character operation object associated with a second character object whose distance to the first character object meets the first preset condition is rendered according to the first display parameter, that is, the UI avatar associated with that second character object is rendered according to the first display parameter so that it has a display effect distinct from other UI avatars;
  • when selecting a target character operation object, the user can therefore select the target character object quickly and accurately through a selection gesture on the character operation object based on that distinct display effect, which greatly improves the user's operating experience during interaction.


Abstract

An information processing method, a terminal (101, 70), and a computer storage medium. A graphical user interface is obtained by executing a software application on a processor (71) of a terminal and performing rendering on a display (72) of the terminal; the processor (71), the graphical user interface, and the software application are implemented on a game system. The method includes: rendering at least one virtual resource object (201, 301, 401) on the graphical user interface; at least one character container object deployed in at least one character selection area (802, 91) of the graphical user interface includes at least one window slot (202, 302, 402); and when a view-acquisition gesture on at least one character operation object (b11, 911) in the character container object is detected, rendering on the graphical user interface a view image captured by a virtual lens associated with the at least one character operation object (b11, 911).

Description

Information Processing Method, Terminal, and Computer Storage Medium. Technical Field
The present invention relates to information processing technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
Background Art
With the rapid development of Internet technologies and the growing popularity of large-screen and super-large-screen intelligent terminals, the processing capability of intelligent terminal processors has become increasingly strong, giving rise to many applications that implement control based on human-computer interaction on large or super-large screens. In control based on human-computer interaction, multiple users may run different interaction modes in various group-forming manners such as one-to-one, one-to-many, and many-to-many, so as to obtain different interaction results. For example, in a graphical user interface (GUI) rendered on a large or super-large screen, after multiple users are divided into two different groups, control processing in human-computer interaction enables information interaction between the different groups, with different interaction results obtained according to the responses to the information interaction; information interaction can likewise be carried out among members of the same group, again with different interaction results obtained according to the responses.
In the prior art, during information interaction, because target group members move frequently, or because target group members are numerous and move frequently, multiple trigger operations are needed to select a target group member. This trigger process takes a relatively long time, is not highly accurate, and cannot meet the demand for speed and precision during information interaction. For this problem, no effective solution currently exists in the related art.
Summary of the Invention
Embodiments of the present invention are expected to provide an information processing method, a terminal, and a computer storage medium capable of quickly and accurately selecting a target object during information interaction, thereby improving user experience.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides an information processing method. A graphical user interface is obtained by executing a software application on a processor of a terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. The method includes:
rendering at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
deploying, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
detecting, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition, and rendering, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window slots; and
when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, performing, by the first character object, at least one of the first virtual operations on the corresponding second character object.
An embodiment of the present invention further provides a terminal. The terminal includes a rendering processing unit, a deployment unit, a detection unit, and an operation execution unit, where:
the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface; to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; and to render, in at least one of the window slots according to a first display parameter, the character operation object associated with the second character object detected by the detection unit;
the deployment unit is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
the detection unit is configured to detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition; and
the operation execution unit is configured to, when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
An embodiment of the present invention further provides a terminal. The terminal includes a processor and a display; the processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system;
the processor is further configured to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
at least one character container object deployed in at least one character selection area of the graphical user interface includes at least one window slot;
a second character object whose distance to the first character object meets a first preset condition is detected in the graphical user interface, and a character operation object associated with the detected second character object is rendered in at least one of the window slots according to a first display parameter; and
when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, the first character object performs at least one of the first virtual operations on the corresponding second character object.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the information processing method according to the embodiments of the present invention.
According to the information processing method, terminal, and computer storage medium of the embodiments of the present invention, the character operation objects associated with the second character objects that carry out information interaction with the first character object are rendered in corresponding window slots of the character container object deployed in the character selection area of the graphical user interface, and the character operation object associated with a second character object whose distance to the first character object meets the first preset condition is rendered according to the first display parameter; that is, the UI avatar associated with such a second character object is rendered according to the first display parameter so that it has a display effect distinct from other UI avatars. When selecting a target character operation object, the user can thus, based on that distinct display effect, select the target character object quickly and accurately through a selection gesture on the character operation object, which greatly improves the user's operating experience during interaction.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application architecture for information interaction of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention;
FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the detection principle, in an information processing method according to an embodiment of the present invention, for second character objects whose distance to the first character object meets the first preset condition;
FIG. 5 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention;
FIG. 7 is a schematic diagram of the detection principle, in an information processing method according to an embodiment of the present invention, for second character objects whose distance to the first character object meets the second preset condition;
FIG. 8 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention;
FIG. 9 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an interactive application of an information processing method according to an embodiment of the present invention;
FIG. 11 is a fourth schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a terminal according to Embodiment 4 of the present invention;
FIG. 13 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention;
FIG. 14 is a schematic structural diagram of a terminal according to Embodiment 6 of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic diagram of an application architecture for information interaction of an information processing method according to an embodiment of the present invention. As shown in FIG. 1, the application architecture includes a server 101 and at least one terminal. In this application architecture, the terminals include terminal 102, terminal 103, terminal 104, terminal 105, and terminal 106, and the at least one terminal may establish a connection with the server 101 through a network 100 (such as a wired or wireless network). Specifically, the terminal may be of types such as a mobile phone, a desktop computer, a PC, or an all-in-one machine.
In this embodiment, the processor of the terminal can execute a software application and perform rendering on the display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system. In this embodiment, while the processor, the graphical user interface, and the software application are implemented on the game system, the at least one terminal may carry out information interaction with the server 101 through a wired or wireless network to implement one-versus-one or many-versus-many (for example, three-versus-three or five-versus-five) application mode scenarios in the game system. The one-versus-one application scenario may be information interaction between a virtual resource object in the graphical user interface rendered by one terminal and a virtual resource object preset in the game system (which may be understood as human versus machine), that is, information interaction between the terminal and the server; the one-versus-one application scenario may also be information interaction between a virtual resource object in the graphical user interface rendered by one terminal and a virtual resource object in the graphical user interface rendered by another terminal, for example, between a virtual resource object rendered by terminal 102 and one rendered by terminal 103. In a many-versus-many application mode scenario, taking three-versus-three as an example, the virtual resource objects in the graphical user interfaces rendered by terminal 1, terminal 2, and terminal 3 form a first group, the virtual resource objects in the graphical user interfaces rendered by terminal 4, terminal 5, and terminal 6 form a second group, and information interaction is carried out between the members of the first group and the members of the second group.
The example in FIG. 1 above is only one application architecture instance for implementing the embodiments of the present invention; the embodiments of the present invention are not limited to the application architecture of FIG. 1, and the various embodiments of the present invention are proposed based on this application architecture.
Embodiment 1
An embodiment of the present invention provides an information processing method. FIG. 2 is a schematic flowchart of the information processing method according to Embodiment 1 of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 2, the method includes:
Step 201: Render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command.
Step 202: Deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot.
Step 203: Detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition, and render, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window slots.
Step 204: When a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, the first character object performs at least one of the first virtual operations on the corresponding second character object.
In this embodiment, the graphical user interface includes at least one character selection area; the character selection area includes at least one character container object; the character container object includes at least one window slot, at least some of the window slots carrying corresponding character operation objects. A character operation object may be represented in the graphical user interface by an identifier (which may be an avatar) of the character object associated with that character operation object. Here, the second character objects associated with the character operation objects belong to a different group from the first character object. The manner of rendering the character container object in the character selection area includes, but is not limited to, a bar shape or a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection wheel object.
FIG. 3 is a first schematic diagram of the graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 3, the graphical user interface rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one first character object a10. The user of the terminal may carry out information interaction through the graphical user interface, that is, input user commands. The first character object a10 can perform a first virtual operation based on a first user command detected by the terminal, the first virtual operation including, but not limited to, a movement operation, a physical attack operation, a skill attack operation, and so on. It may be understood that the first character object a10 is the character object controlled by the user of the terminal; in the game system, the first character object a10 can perform corresponding actions in the graphical user interface based on the user's operations. In one implementation, the graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located; a detailed schematic of the mini-map 801 is shown as 801a, from which it can be seen that the position in the virtual area of each character object (including allies belonging to the same first group as the first character object a10 and enemies belonging to a second group) is marked on the mini-map 801. The graphical user interface further includes at least one skill object 803, and the user may control the user's character object to perform a corresponding skill release operation through a skill release operation.
In the illustration of FIG. 3, the graphical user interface has a character selection area 802 in which a character container object is deployed; in this illustration, the character container object is represented by a character selection bar object (that is, the character container object presents a bar-shaped display effect). The character container object includes at least one window slot, and the character operation objects associated with the second character objects that interact with the first character object are rendered in the corresponding window slots. Taking character operation objects represented by avatars as an example, the character selection area 802 includes at least one avatar, and the at least one avatar corresponds one-to-one to the at least one second character object that interacts with the first character object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are five second character objects belonging to a different group from the first character object a10, and the character selection area 802 correspondingly includes five character operation objects, namely character operation objects b11, b12, b13, b14, and b15 shown in FIG. 3. It may be understood that the five character operation objects in the character selection area 802 correspond one-to-one to the five second character objects belonging to a different group from the first character object.
Based on the graphical user interface illustrated in FIG. 3, during control by the user of the terminal, the position of the first character object changes in real time, and correspondingly, the positions of the second character objects in the graphical user interface change in real time; therefore, while the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the character object on which the virtual operation is to be performed. On this basis, in this embodiment, a second character object whose distance to the first character object meets a first preset condition is detected in the graphical user interface.
Specifically, detecting, in the graphical user interface, a second character object whose distance to the first character object meets the first preset condition includes: detecting, in the graphical user interface, a second character object whose distance to the first character object is less than a first preset threshold.
FIG. 4 is a schematic diagram of the detection principle, in the information processing method according to an embodiment of the present invention, for second character objects whose distance to the first character object meets the first preset condition. As shown in FIG. 4, a circular area centered on the first character object 1 with the first preset threshold (R) as its radius is detected, and the area range covered by the circular area is obtained; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is obtained. Further, the coordinates of the second character objects in the graphical user interface are detected in real time, and it is determined whether the detected coordinates fall within the coordinate range representing the circular area. When it is determined that coordinates fall within that range (as shown in FIG. 4, second character object 2, second character object 3, and second character object 4 are all within the circular area), second character objects whose distance to the first character object is less than the first preset threshold are detected in the graphical user interface. The first preset threshold matches the attack distance or skill release distance of the first character object, so that in subsequent operations the second character object can be quickly selected and subjected to a virtual operation by the first character object.
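The circular-area check described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and coordinate representation are assumptions.

```python
from typing import List, Tuple

def within_first_threshold(first_pos: Tuple[float, float],
                           second_positions: List[Tuple[float, float]],
                           first_threshold: float) -> List[int]:
    """Return the indices of second character objects whose distance to the
    first character object is less than the first preset threshold R."""
    fx, fy = first_pos
    hits = []
    for i, (sx, sy) in enumerate(second_positions):
        # Compare squared distances to avoid a square root per object.
        if (sx - fx) ** 2 + (sy - fy) ** 2 < first_threshold ** 2:
            hits.append(i)
    return hits
```

With the first character object at the origin and R = 5, an object at (1, 1) is inside the circle while objects at (3, 4) and (6, 0) are at or beyond the boundary and are excluded.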
After a second character object whose distance to the first character object meets the first preset condition is detected in the graphical user interface, the corresponding character operation object in the character container object associated with that second character object is determined, and the character operation object is rendered in the corresponding window slot according to the first display parameter. FIG. 5 is a second schematic diagram of the graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 5, the character operation object associated with a second character object meeting the first preset condition is rendered with the first display parameter (see character operation object b12 in FIG. 5, whose outer-ring edge has a rendering effect distinct from the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered with the first display parameter (such as character operation object b12) has a clearly distinguishing feature, so that the terminal user can immediately identify it and can quickly select such a clearly distinguished character operation object in subsequent operations.
In this embodiment, for at least one character operation object rendered according to the first display parameter in the character selection bar object of the graphical user interface, when the terminal user triggers the at least one character operation object, that is, when the terminal detects a selection gesture on the at least one character operation object, the second character object associated with that character operation object is selected; the first character object then performs the first virtual operation on the second character object. Specifically, in the game system, the first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with a second character object is selected, the first character object directly performs the physical attack operation on the second character object. When a skill release operation is to be performed, a skill object needs to be selected first through a skill selection gesture; after the character operation object associated with a second character object is selected, the first character object performs the skill release operation of the skill object on the second character object.
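The branch above, physical attack when no skill was pre-selected versus skill release when one was, can be sketched as a small dispatcher. The function and the string command format are hypothetical, for illustration only.

```python
from typing import Optional

def resolve_selection(selected_skill: Optional[str], target_id: int) -> str:
    """Map a selection gesture on a highlighted avatar to a virtual
    operation: a direct physical attack, or the release of the skill
    object chosen beforehand, on the selected second character object."""
    if selected_skill is None:
        # No skill was chosen first: perform a physical attack directly.
        return f"physical_attack->{target_id}"
    # A skill was selected earlier via a skill selection gesture.
    return f"release:{selected_skill}->{target_id}"
```

For example, selecting avatar 3 with no pending skill yields a physical attack command, while selecting it after picking a skill yields a release command for that skill.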
With the technical solution of this embodiment of the present invention, the character operation objects associated with the second character objects that carry out information interaction with the first character object are rendered in corresponding window slots of the character container object deployed in the character selection area of the graphical user interface, and the character operation object associated with a second character object whose distance to the first character object meets the first preset condition is rendered according to the first display parameter; that is, the UI avatar associated with such a second character object is rendered according to the first display parameter so that it has a display effect distinct from other UI avatars. When selecting a target character operation object, the user can thus, based on that distinct display effect, select the target character object quickly and accurately through a selection gesture on the character operation object, which greatly improves the user's operating experience during interaction.
Embodiment 2
Based on Embodiment 1, an embodiment of the present invention further provides an information processing method. FIG. 6 is a schematic flowchart of the information processing method according to Embodiment 2 of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on the display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 6, the method includes:
Step 301: Render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command.
Step 302: Deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot.
In this embodiment, the graphical user interface includes at least one character selection area; the character selection area includes at least one character container object; the character container object includes at least one window slot, at least some of the window slots carrying corresponding character operation objects. A character operation object may be represented in the graphical user interface by an identifier (which may be an avatar) of the character object associated with that character operation object. Here, the second character objects associated with the character operation objects belong to a different group from the first character object. The manner of rendering the character container object in the character selection area includes, but is not limited to, a bar shape or a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection wheel object.
Specifically, as shown in FIG. 3, the graphical user interface rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one first character object a10. The user of the terminal may carry out information interaction through the graphical user interface, that is, input user commands. The first character object a10 can perform a first virtual operation based on a first user command detected by the terminal, the first virtual operation including, but not limited to, a movement operation, a physical attack operation, a skill attack operation, and so on. It may be understood that the first character object a10 is the character object controlled by the user of the terminal; in the game system, the first character object a10 can perform corresponding actions in the graphical user interface based on the user's operations. In one implementation, the graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located; a detailed schematic of the mini-map 801 is shown as 801a, from which it can be seen that the position in the virtual area of each character object (including allies belonging to the same first group as the first character object a10 and enemies belonging to a second group) is marked on the mini-map 801. The graphical user interface further includes at least one skill object 803, and the user may control the user's character object to perform a corresponding skill release operation through a skill release operation.
In the illustration of FIG. 3, the graphical user interface has a character selection area 802 in which a character container object is deployed; in this illustration, the character container object is represented by a character selection bar object (that is, the character container object presents a bar-shaped display effect). The character container object includes at least one window slot, and the character operation objects associated with the second character objects that interact with the first character object are rendered in the corresponding window slots. Taking character operation objects represented by avatars as an example, the character selection area 802 includes at least one avatar, and the at least one avatar corresponds one-to-one to the at least one second character object that interacts with the first character object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are five second character objects belonging to a different group from the first character object a10, and the character selection area 802 correspondingly includes five character operation objects, namely character operation objects b11, b12, b13, b14, and b15 shown in FIG. 3. It may be understood that the five character operation objects in the character selection area 802 correspond one-to-one to the five second character objects belonging to a different group from the first character object.
Based on the graphical user interface illustrated in FIG. 3, during control by the user of the terminal, the position of the first character object changes in real time, and correspondingly, the positions of the second character objects in the graphical user interface change in real time; on this basis, while the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the character object on which the virtual operation is to be performed. Therefore, in this embodiment, a second character object whose distance to the first character object meets a first preset condition is detected in the graphical user interface.
Step 303: Detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition, and render, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window slots.
Here, detecting, in the graphical user interface, a second character object whose distance to the first character object meets the first preset condition includes: detecting, in the graphical user interface, a second character object whose distance to the first character object is less than a first preset threshold. Referring to FIG. 4, a circular area centered on the first character object 1 with the first preset threshold (R) as its radius is detected, and the area range covered by the circular area is obtained; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is obtained. Further, the coordinates of the second character objects in the graphical user interface are detected in real time, and it is determined whether the detected coordinates fall within the coordinate range representing the circular area. When it is determined that coordinates fall within that range (as shown in FIG. 4, second character object 2, second character object 3, and second character object 4 are all within the circular area), second character objects whose distance to the first character object is less than the first preset threshold are detected in the graphical user interface. The first preset threshold matches the attack distance or skill release distance of the first character object, so that in subsequent operations the second character object can be quickly selected and subjected to a virtual operation by the first character object.
Step 304: Detect, among the second character objects whose distance to the first character object meets the first preset condition, at least some second character objects whose distance to the first character object meets a second preset condition, and render, according to a second display parameter, the character operation objects associated with the detected at least some second character objects in the at least one window slot.
Here, detecting, among the second character objects whose distance to the first character object meets the first preset condition, at least some second character objects whose distance to the first character object meets the second preset condition includes:
detecting, among the second character objects whose distance to the first character object meets the first preset condition, second character objects whose distance to the first character object reaches a second preset threshold, where the second preset threshold is greater than or equal to the first preset threshold.
FIG. 7 is a schematic diagram of the detection principle, in the information processing method according to an embodiment of the present invention, for second character objects whose distance to the first character object meets the second preset condition. Referring to FIG. 4 and FIG. 7, consider the second character objects whose distance to the first character object meets the first preset condition (as shown in FIG. 4, second character objects 2, 3, and 4 all meet the first preset condition), that is, the second character objects whose earlier coordinate values lay in the circular area with the first preset threshold (R) as its radius. Because the positions of the second character objects in the graphical user interface change in real time, before step 305, that is, before a selection gesture on at least one character operation object rendered according to the first display parameter is detected, the coordinate values of those second character objects are detected in real time, and it is determined whether the coordinate values lie in the circular area centered on the first character object with the second preset threshold (r in FIG. 7) as its radius. In the illustration of FIG. 7, the second preset threshold (r) is greater than the first preset threshold (R); that is, among the second character objects whose earlier coordinate values lay in the circle of radius R, as the first and second character objects move in real time, at least some second character objects will move until their distance to the first character object exceeds the first preset threshold (R) and reaches the second preset threshold (r), such as second character object 4 in FIG. 7. The selectable operation state of the character operation objects associated with those second character objects is then released, and those character operation objects are rendered in the corresponding window slots according to a second display parameter. In one implementation, the second display parameter may be a normal display parameter; that is, in the graphical user interface, apart from the character operation objects displayed according to the first display parameter, all remaining virtual resource objects are rendered according to the second display parameter.
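The two-threshold rule above behaves like hysteresis: an avatar becomes selectable inside the inner radius R and only loses that state once the distance reaches the larger radius r. A minimal sketch, with hypothetical names, assuming r >= R as the text states:

```python
def update_selectable(distance: float, currently_selectable: bool,
                      first_threshold: float, second_threshold: float) -> bool:
    """Hysteresis between the first preset threshold R and the larger
    second preset threshold r: selectable when closer than R, released
    only when the distance reaches r, state kept in between."""
    if distance < first_threshold:
        return True
    if distance >= second_threshold:
        return False
    # Between R and r the previous state is kept, which avoids the
    # highlight flickering as objects move around the boundary.
    return currently_selectable
```

With R = 5 and r = 8, an object at distance 6 stays selectable if it already was, but does not become selectable if it was not.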
Step 305: When a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, the first character object performs at least one of the first virtual operations on the corresponding second character object.
Here, after a second character object whose distance to the first character object meets the first preset condition is detected in the graphical user interface, the corresponding character operation object in the character container object associated with that second character object is determined, and the character operation object is rendered in the corresponding window slot according to the first display parameter. Referring to FIG. 5, the character operation object associated with a second character object meeting the first preset condition is rendered with the first display parameter (see character operation object b12 in FIG. 5, whose outer-ring edge has a rendering effect distinct from the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered with the first display parameter (such as character operation object b12) has a clearly distinguishing feature, so that the terminal user can immediately identify it and can quickly select such a clearly distinguished character operation object in subsequent operations.
In this embodiment, for at least one character operation object rendered according to the first display parameter in the character selection bar object of the graphical user interface, when the terminal user triggers the at least one character operation object, that is, when the terminal detects a selection gesture on the at least one character operation object, the second character object associated with that character operation object is selected; the first character object then performs the first virtual operation on the second character object. Specifically, in the game system, the first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with a second character object is selected, the first character object directly performs the physical attack operation on the second character object. When a skill release operation is to be performed, a skill object needs to be selected first through a skill selection gesture; after the character operation object associated with a second character object is selected, the first character object performs the skill release operation of the skill object on the second character object.
With the technical solution of this embodiment of the present invention, the character operation objects associated with the second character objects that carry out information interaction with the first character object are rendered in corresponding window slots of the character container object deployed in the character selection area of the graphical user interface, and the character operation object associated with a second character object whose distance to the first character object meets the first preset condition is rendered according to the first display parameter; that is, the UI avatar associated with such a second character object is rendered according to the first display parameter so that it has a display effect distinct from other UI avatars. When selecting a target character operation object, the user can thus, based on that distinct display effect, select the target character object quickly and accurately through a selection gesture on the character operation object, which greatly improves the user's operating experience during interaction.
Embodiment 3
Based on Embodiments 1 and 2, an embodiment of the present invention further provides an information processing method. FIG. 8 is a schematic flowchart of the information processing method according to Embodiment 3 of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on the display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 8, the method includes:
Step 401: Render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command.
Step 402: Deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot.
In this embodiment, the graphical user interface includes at least one character selection area; the character selection area includes at least one character container object; the character container object includes at least one window slot, at least some of the window slots carrying corresponding character operation objects. A character operation object may be represented in the graphical user interface by an identifier (which may be an avatar) of the character object associated with that character operation object. Here, the second character objects associated with the character operation objects belong to a different group from the first character object. The manner of rendering the character container object in the character selection area includes, but is not limited to, a bar shape or a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection wheel object.
Specifically, as shown in FIG. 3, the graphical user interface rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one first character object a10. The user of the terminal may carry out information interaction through the graphical user interface, that is, input user commands. The first character object a10 can perform a first virtual operation based on a first user command detected by the terminal, the first virtual operation including, but not limited to, a movement operation, a physical attack operation, a skill attack operation, and so on. It may be understood that the first character object a10 is the character object controlled by the user of the terminal; in the game system, the first character object a10 can perform corresponding actions in the graphical user interface based on the user's operations. In one implementation, the graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located; a detailed schematic of the mini-map 801 is shown as 801a, from which it can be seen that the position in the virtual area of each character object (including allies belonging to the same first group as the first character object a10 and enemies belonging to a second group) is marked on the mini-map 801. The graphical user interface further includes at least one skill object 803, and the user may control the user's character object to perform a corresponding skill release operation through a skill release operation.
In the illustration of FIG. 3, the graphical user interface has a character selection area 802 in which a character container object is deployed; in this illustration, the character container object is represented by a character selection bar object (that is, the character container object presents a bar-shaped display effect). The character container object includes at least one window slot, and the character operation objects associated with the second character objects that interact with the first character object are rendered in the corresponding window slots. Taking character operation objects represented by avatars as an example, the character selection area 802 includes at least one avatar, and the at least one avatar corresponds one-to-one to the at least one second character object that interacts with the first character object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are five second character objects belonging to a different group from the first character object a10, and the character selection area 802 correspondingly includes five character operation objects, namely character operation objects b11, b12, b13, b14, and b15 shown in FIG. 3. It may be understood that the five character operation objects in the character selection area 802 correspond one-to-one to the second character objects belonging to a different group from the first character object.
Based on the graphical user interface illustrated in FIG. 3, during control by the user of the terminal, the position of the first character object changes in real time, and correspondingly, the positions of the second character objects in the graphical user interface change in real time; on this basis, while the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the character object on which the virtual operation is to be performed. Therefore, in this embodiment, a second character object whose distance to the first character object meets a first preset condition is detected in the graphical user interface.
Step 403: Detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition, and render, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window slots.
Here, detecting, in the graphical user interface, a second character object whose distance to the first character object meets the first preset condition includes: detecting, in the graphical user interface, a second character object whose distance to the first character object is less than a first preset threshold. Referring to FIG. 4, a circular area centered on the first character object 1 with the first preset threshold (R) as its radius is detected, and the area range covered by the circular area is obtained; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is obtained. Further, the coordinates of the second character objects in the graphical user interface are detected in real time, and it is determined whether the detected coordinates fall within the coordinate range representing the circular area. When it is determined that coordinates fall within that range (as shown in FIG. 4, second character object 2, second character object 3, and second character object 4 are all within the circular area), second character objects whose distance to the first character object is less than the first preset threshold are detected in the graphical user interface. The first preset threshold matches the attack distance or skill release distance of the first character object, so that in subsequent operations the second character object can be quickly selected and subjected to a virtual operation by the first character object.
Step 404: When a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, the first character object performs at least one of the first virtual operations on the corresponding second character object.
Here, after a second character object whose distance to the first character object meets the first preset condition is detected in the graphical user interface, the corresponding character operation object in the character container object associated with that second character object is determined, and the character operation object is rendered in the corresponding window slot according to the first display parameter. Referring to FIG. 5, the character operation object associated with a second character object meeting the first preset condition is rendered with the first display parameter (see character operation object b12 in FIG. 5, whose outer-ring edge has a rendering effect distinct from the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered with the first display parameter (such as character operation object b12) has a clearly distinguishing feature, so that the terminal user can immediately identify it and can quickly select such a clearly distinguished character operation object in subsequent operations.
In this embodiment, for at least one character operation object rendered according to the first display parameter in the character selection bar object of the graphical user interface, when the terminal user triggers the at least one character operation object, that is, when the terminal detects a selection gesture on the at least one character operation object, the second character object associated with that character operation object is selected; the first character object then performs the first virtual operation on the second character object. Specifically, in the game system, the first virtual operation may be a physical attack operation or a skill release operation. When the first virtual operation is a physical attack operation, after the character operation object associated with a second character object is selected, the first character object directly performs the physical attack operation on the second character object. When a skill release operation is to be performed, a skill object needs to be selected first through a skill selection gesture; after the character operation object associated with a second character object is selected, the first character object performs the skill release operation of the skill object on the second character object.
Step 405: Acquire state attribute information of the second character objects in the graphical user interface and synchronize the state attribute information to a server; and obtain from the server the state attribute information of the character objects associated with the character operation objects in the character container object.
Here, the terminal acquires the state attribute information of the second character objects in the graphical user interface. Because the virtual space where the first and second character objects are located may be relatively large based on the settings of the software application, the view image displayed in the graphical user interface rendered by the terminal includes the first character object and may or may not include the second character objects. In this embodiment, the terminal obtains the state attribute information of the second character objects contained in its graphical user interface and synchronizes the state attribute information, associated with the corresponding second character objects, to the server. The state attribute information includes, but is not limited to, the blood value, hit points, or skill attribute information of the second character object.
In this embodiment, the terminal obtains, from the server according to a preset rule, the state attribute information of the character objects associated with the character operation objects in the character container object, so that when the terminal's own graphical user interface does not contain at least some second character objects and the terminal therefore cannot obtain their state attribute information, it can obtain, through the second character objects and associated state attribute information synchronized to the server by other terminals, the state attribute information of the second character objects associated with all the character operation objects included in the character container object. The terminal and the other terminals belong to the same group; it may be understood that the first character object controlled by the terminal and the first character objects controlled by the other terminals belong to the same group in the game system and jointly perform virtual operations on the second character objects belonging to another group. When the terminal's graphical user interface does not include all the second character objects, the graphical user interfaces of the other terminals may include at least some of them; thus, with each terminal of the same group acquiring the state attribute information of the second character objects included in its own graphical user interface, mutual synchronization of the state attribute information of the second character objects is achieved.
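The merge of locally observed state with state synchronized by same-group terminals can be sketched as follows. This is an illustrative assumption about the data shapes (dictionaries keyed by character id); the patent does not specify a representation.

```python
def merge_group_views(local_view: dict, server_view: dict) -> dict:
    """Combine the second-character state info observed by this terminal
    with the info other same-group terminals synchronized to the server.
    Locally observed entries are taken as the freshest and win conflicts."""
    merged = dict(server_view)   # start from what teammates reported
    merged.update(local_view)    # overlay this terminal's own observations
    return merged
```

For example, if this terminal sees member m6 at 40 blood while the server still holds an older value of 55 and also knows about m7, the merged view keeps the local 40 and gains m7.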
Step 406: Render, according to the obtained state attribute information, the character operation objects associated with the second character objects in the corresponding window slots in a preset manner.
Here, after the state attribute information of the second character objects is acquired by the terminal itself and synchronized from the server, the character operation objects associated with the second character objects are rendered in the corresponding window slots of the character container object in a preset manner. Specifically, FIG. 9 is a third schematic diagram of the graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 9, taking the state attribute information being a blood value as an example, the outer-ring area of the character operation object associated with a second character object (see character operation object b22 in FIG. 9) serves as a blood-bar display area b221, and the proportion of blood filling the blood-bar display area b221 represents the current blood value of the corresponding second character object. Of course, in the embodiments of the present invention, the manner of rendering the character operation objects associated with the second character objects in the corresponding window slots according to the state attribute information is not limited to that shown in FIG. 9.
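The ring-shaped blood bar reduces to mapping a blood value onto a fill proportion of the outer ring. A minimal sketch, with a hypothetical function name, expressing the fill as a sweep angle in degrees:

```python
def blood_ring_angle(current_hp: float, max_hp: float) -> float:
    """Map a second character object's blood value to the sweep angle
    (in degrees) of the ring-shaped blood-bar area around its avatar."""
    if max_hp <= 0:
        return 0.0
    # Clamp so that stale or out-of-range synced values never over- or
    # under-fill the ring.
    ratio = max(0.0, min(1.0, current_hp / max_hp))
    return 360.0 * ratio
```

At half blood the ring is half filled (180 degrees); values below zero or above the maximum are clamped to an empty or full ring.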
The second character objects contained in the graphical user interfaces of the terminal and the other terminals may not include all the second character objects of the interacting other group. Taking a five-versus-five application scenario as an example, the members of the first group are group members 1, 2, 3, 4, and 5, and the members of the second group are group members 6, 7, 8, 9, and 10. Suppose the terminal controls group member 1 and only group member 6 appears in the view image of the terminal's graphical user interface, while group members 7, 8, and 9 appear in the view images of the graphical user interfaces of the other terminals of the first group's members; then group member 10 does not appear in the view image of the graphical user interface of any terminal controlled by the first group's members. On this basis, as shown in FIG. 9, character operation object b21 presents a display effect different from the other operation objects, specifically a gray effect, indicating that the second character object corresponding to character operation object b21 is neither in the view image of the first character object a10 nor in the view images of the other character objects belonging to the same group as the first character object a10; correspondingly, the outer-ring area of character operation object b21 does not display the state attribute information of the second character object associated with it.
With the technical solution of this embodiment of the present invention, on one hand, the character operation objects associated with the second character objects that carry out information interaction with the first character object are rendered in corresponding window slots of the character container object deployed in the character selection area of the graphical user interface, and the character operation object associated with a second character object whose distance to the first character object meets the first preset condition is rendered according to the first display parameter; that is, the UI avatar associated with such a second character object is rendered according to the first display parameter so that it has a display effect distinct from other UI avatars, allowing the user, when selecting a target character operation object, to select the target character object quickly and accurately through a selection gesture based on that distinct display effect, which greatly improves the user's operating experience during interaction. On the other hand, the state attribute information of the second character objects associated with the character operation objects in the character container object is obtained by synchronizing the state attribute information of the second character objects within the view images of character objects belonging to the same group (that is, teammates), and the state attribute information is rendered in the corresponding window slots in a specific manner; that is, the state attribute information of the second character objects (that is, the enemies) is reflected on the corresponding character operation objects (UI avatars), so that the user can quickly learn the state attribute information of the second character objects, improving the user's operating experience during information interaction.
Based on the method embodiments of Embodiments 1 to 3, a one-versus-one application scenario is described in detail below as an example. The one-versus-one application scenario is one in which a first character object controlled by terminal 1 and a second character object controlled by terminal 2 carry out information interaction; other application scenarios can refer to the description of this scenario and are not repeated in this embodiment. FIG. 10 is a schematic diagram of an interactive application of the information processing method according to an embodiment of the present invention. As shown in FIG. 10, this application scenario includes terminal 1, terminal 2, and a server; terminal 1 is triggered and controlled by user 1, and terminal 2 is triggered and controlled by user 2. The method includes:
For user 1, step 11: User 1 triggers the game system and logs in with identity authentication information, which may be a user name and a password.
Step 12: Terminal 1 transmits the acquired identity authentication information to server 3, which performs identity authentication and, after the authentication passes, returns a first graphical user interface to terminal 1. The first graphical user interface includes a first character object, which can perform virtual operations based on trigger operations by user 1, the virtual operations including a movement operation of the first character object, and attack operations or skill release operations of the first character object against other character objects, and so on.
For user 2, step 21: User 2 triggers the game system and logs in with identity authentication information, which may be a user name and a password.
Step 22: Terminal 2 transmits the acquired identity authentication information to server 3, which performs identity authentication and, after the authentication passes, returns a second graphical user interface to terminal 2. The second graphical user interface includes a second character object, which can perform virtual operations based on trigger operations by user 2, the virtual operations including a movement operation of the second character object, and attack operations or skill release operations of the second character object against other character objects, and so on.
In this embodiment, when, based on the trigger operations of user 1 and user 2, the first character object and the second character object become objects of information interaction, that is, the first character object takes the second character object as its target interaction object and, correspondingly, the second character object takes the first character object as its target interaction object (in other words, when user 1 and user 2 act as the two controlling sides of a game battle), the window slot of the character container object in the character selection area of the first graphical user interface renders the character operation object associated with the second character object; correspondingly, the window slot of the character container object in the character selection area of the second graphical user interface renders the character operation object associated with the first character object. Further, terminal 1 detects in real time the distance between the second character object and the first character object; when the distance is within the first preset threshold, the character operation object associated with the second character object is rendered in the window slot according to the first display parameter, that is, the character operation object is highlighted. Correspondingly, terminal 2 detects in real time the distance between the first character object and the second character object; when the distance is within the first preset threshold, the character operation object associated with the first character object is rendered in the window slot according to the first display parameter, that is, highlighted.
At this point, the login and initialization operations of the game system for user 1 and user 2 are complete.
For user 1, step 13: The user performs a trigger operation on the first graphical user interface presented by terminal 1. The trigger operation may target any virtual resource object in the first graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a movement operation of the first character object, and so on. In this embodiment, the trigger operation is a selection gesture on a character operation object rendered according to the first display parameter in the character container object of the first graphical user interface.
Step 14: When terminal 1 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction for the corresponding operation object, an information interaction instruction (such as a physical attack instruction) for the corresponding character object, a movement instruction, and so on; changes to the corresponding data are recorded during instruction execution. In this embodiment, when terminal 1 obtains a selection gesture on a character operation object rendered according to the first display parameter, it generates a corresponding first instruction and executes the first instruction to control the first character object to perform a virtual operation (such as a physical attack operation or a skill release operation) on the corresponding second character object.
Step 15: Synchronize the changed data to server 3 as first data corresponding to terminal 1.
For user 2, step 23: The user performs a trigger operation on the second graphical user interface presented by terminal 2. The trigger operation may target any virtual resource object in the second graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a movement operation of the second character object, and so on. In this embodiment, the trigger operation is a selection gesture on a character operation object rendered according to the first display parameter in the character container object of the second graphical user interface.
Step 24: When terminal 2 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction for the corresponding operation object, an information interaction instruction (such as a physical attack instruction) for the corresponding character object, a movement instruction, and so on; changes to the corresponding data are recorded during instruction execution. In this embodiment, when terminal 2 obtains a selection gesture on a character operation object rendered according to the first display parameter, it generates a corresponding second instruction and executes the second instruction to control the second character object to perform a virtual operation (such as a physical attack operation or a skill release operation) on the corresponding first character object.
Step 25: Synchronize the changed data to server 3 as second data corresponding to terminal 2.
For server 3, step 30: Perform data updating based on the first data synchronized by terminal 1 and the second data synchronized by terminal 2, and synchronize the updated data to terminal 1 and terminal 2 respectively.
With reference to the description of the foregoing method embodiments, an embodiment of the present invention is described below with a real application scenario as an example. This application scenario relates to Multiplayer Online Battle Arena (MOBA) games. The technical terms involved in MOBA are: 1) UI layer: the icons in the graphical user interface; 2) skill indicator: special effects, halos, and operations used to assist skill release; 3) virtual lens: which may be understood as the camera in the game; 4) mini-map: a scaled-down version of the large map, which may be understood as a radar map on which the information and positions of enemies and allies are displayed.
FIG. 11 is a fourth schematic diagram of the graphical user interface in the information processing method according to an embodiment of the present invention; this illustration is based on an application scenario in an actual interaction process. Referring to FIG. 11, a first character object 93 and at least one skill object 92 are rendered in this embodiment; the first character object 93 can perform corresponding virtual operations based on the user's trigger operations. The graphical user interface 90 further includes a character selection area 91, and the character selection area 91 includes a character container object; in this illustration, the character container object includes five window slots, each rendering one character operation object, namely character operation objects 911, 912, 913, 914, and 915. Each character operation object is associated with one character object, and all five character objects belong to a different group from the first character object 93; that is, the five character objects interact as enemies of the first character object 93. In this application scenario, the terminal detects in real time the second character objects in the graphical user interface 90 whose distance to the first character object 93 meets the first preset threshold, and renders the character operation objects associated with the corresponding second character objects according to the first display parameter; the first preset threshold may be set as the skill release distance of a skill object according to actual needs, though it is of course not limited to this setting. For example, character operation object 913 shown in FIG. 11 has a highlighted display effect compared with the other character operation objects. On this basis, the user can quickly and accurately select a target character object through a selection gesture on the character operation object based on that distinct display effect, and then perform a virtual operation on the target character object, which greatly improves the user's operating experience during interaction.
Embodiment 4
An embodiment of the present invention further provides a terminal. FIG. 12 is a schematic structural diagram of the terminal according to Embodiment 4 of the present invention. As shown in FIG. 12, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detection unit 63, and an operation execution unit 64, where:
the rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface; to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; and to render, in at least one of the window slots according to a first display parameter, the character operation object associated with the second character object detected by the detection unit 63;
the deployment unit 62 is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
the detection unit 63 is configured to detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition; and
the operation execution unit 64 is configured to, when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
In this embodiment, the graphical user interface includes at least one character selection area; the character selection area includes at least one character container object; the character container object includes at least one window slot, at least some of the window slots carrying corresponding character operation objects. A character operation object may be represented in the graphical user interface by an identifier (which may be an avatar) of the character object associated with that character operation object. Here, the second character objects associated with the character operation objects belong to a different group from the first character object. The manner of rendering the character container object in the character selection area includes, but is not limited to, a bar shape or a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection wheel object.
Specifically, referring to FIG. 3, the graphical user interface rendered by the rendering processing unit 61 includes at least one virtual resource object, and the virtual resource objects include at least one first character object a10. The user of the terminal may carry out information interaction through the graphical user interface, that is, input user commands. The first character object a10 can perform a first virtual operation based on a first user command detected by the terminal, the first virtual operation including, but not limited to, a movement operation, a physical attack operation, a skill attack operation, and so on. It may be understood that the first character object a10 is the character object controlled by the user of the terminal; in the game system, the first character object a10 can perform corresponding actions in the graphical user interface based on the user's operations. In one implementation, the graphical user interface further includes a mini-map 801 of the virtual area where the user's character object is located; a detailed schematic of the mini-map 801 is shown as 801a, from which it can be seen that the position in the virtual area of each character object (including allies belonging to the same first group as the first character object a10 and enemies belonging to a second group) is marked on the mini-map 801. The graphical user interface further includes at least one skill object 803, and the user may control the user's character object to perform a corresponding skill release operation through a skill release operation.
In the illustration of FIG. 3, the deployment unit 62 deploys a character selection area 802 in the graphical user interface; a character container object is deployed in the character selection area 802, and in this illustration, the character container object is represented by a character selection bar object (that is, the character container object presents a bar-shaped display effect). The character container object includes at least one window slot, and the character operation objects associated with the second character objects that interact with the first character object are rendered in the corresponding window slots. Taking character operation objects represented by avatars as an example, the character selection area 802 includes at least one avatar, and the at least one avatar corresponds one-to-one to the at least one second character object that interacts with the first character object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are five second character objects belonging to a different group from the first character object a10, and the character selection area 802 correspondingly includes five character operation objects, namely character operation objects b11, b12, b13, b14, and b15 shown in FIG. 3. It may be understood that the five character operation objects in the character selection area 802 correspond one-to-one to the second character objects belonging to a different group from the first character object.
Based on the graphical user interface illustrated in FIG. 3, during control by the user of the terminal, the position of the first character object changes in real time, and correspondingly, the positions of the second character objects in the graphical user interface change in real time; on this basis, while the first character object performs a virtual operation on a second character object, it is not easy for the user of the terminal to select the character object on which the virtual operation is to be performed. Therefore, in this embodiment, the detection unit 63 detects, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition.
Specifically, the detection unit 63 is configured to detect, in the graphical user interface, a second character object whose distance to the first character object is less than a first preset threshold.
Referring to FIG. 4, the detection unit 63 detects a circular area centered on the first character object 1 with the first preset threshold (R) as its radius and obtains the area range covered by the circular area; the area range may be represented by a coordinate range. That is, an XY coordinate system is established in the virtual space where the first character object and the second character objects are located, and the coordinate range of the circular area in the XY coordinate system is obtained. Further, the detection unit 63 detects the coordinates of the second character objects in the graphical user interface in real time and determines whether the detected coordinates fall within the coordinate range representing the circular area. When it is determined that coordinates fall within that range (as shown in FIG. 4, second character object 2, second character object 3, and second character object 4 are all within the circular area), second character objects whose distance to the first character object is less than the first preset threshold are detected in the graphical user interface. The first preset threshold matches the attack distance or skill release distance of the first character object, so that in subsequent operations the second character object can be quickly selected and subjected to a virtual operation by the first character object.
In one implementation, the detection unit 63 is further configured to detect, among the second character objects whose distance to the first character object meets the first preset condition, at least some second character objects whose distance to the first character object meets a second preset condition;
correspondingly, the rendering processing unit 61 is configured to render, in the at least one window slot according to a second display parameter, the character operation objects associated with the at least some second character objects detected by the detection unit 63; rendering according to the second display parameter has a display effect different from rendering according to the first display parameter.
Specifically, the detection unit 63 is configured to detect, among the second character objects whose distance to the first character object meets the first preset condition, second character objects whose distance to the first character object reaches a second preset threshold, where the second preset threshold is greater than or equal to the first preset threshold.
Here, referring to FIG. 7, consider the second character objects whose distance to the first character object meets the first preset condition (as shown in FIG. 4, second character objects 2, 3, and 4 all meet the first preset condition), that is, the second character objects whose earlier coordinate values lay in the circular area with the first preset threshold (R) as its radius. Because the positions of the second character objects in the graphical user interface change in real time, before the detection unit 63 detects a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter, the coordinate values of those second character objects are detected in real time, and it is determined whether the coordinate values lie in the circular area centered on the first character object with the second preset threshold (r in FIG. 7) as its radius. In the illustration of FIG. 7, the second preset threshold (r) is greater than the first preset threshold (R); that is, among the second character objects whose earlier coordinate values lay in the circle of radius R, as the first and second character objects move in real time, at least some second character objects will move until their distance to the first character object exceeds the first preset threshold (R) and reaches the second preset threshold (r), such as second character object 4 in FIG. 7. The selectable operation state of the character operation objects associated with those second character objects is then released, and those character operation objects are rendered in the corresponding window slots according to a second display parameter. In one implementation, the second display parameter may be a normal display parameter; that is, in the graphical user interface, apart from the character operation objects displayed according to the first display parameter, all remaining virtual resource objects are rendered according to the second display parameter.
In one implementation, the rendering processing unit 61 renders, in the corresponding window slot according to the first display parameter, the character operation object associated with a second character object meeting the first preset condition. Referring to FIG. 5, the character operation object associated with a second character object meeting the first preset condition is rendered with the first display parameter (see character operation object b12 in FIG. 5, whose outer-ring edge has a rendering effect distinct from the other character operation objects, giving it a highlighted display effect). Compared with the other character operation objects, a character operation object rendered with the first display parameter (such as character operation object b12) has a clearly distinguishing feature, so that the terminal user can immediately identify it and can quickly select such a clearly distinguished character operation object in subsequent operations.
Those skilled in the art should understand that the functions of the processing units in the terminal of this embodiment of the present invention may be understood with reference to the relevant description of the foregoing information processing method; the processing units in the information processing terminal of this embodiment of the present invention may be implemented by analog circuits that implement the functions described in this embodiment of the present invention, or by running, on an intelligent terminal, software that performs the functions described in this embodiment of the present invention.
Embodiment 5
Based on Embodiment 4, an embodiment of the present invention further provides a terminal. FIG. 13 is a schematic structural diagram of the terminal according to Embodiment 5 of the present invention. As shown in FIG. 13, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detection unit 63, an operation execution unit 64, an acquisition unit 65, and a communication unit 66, where:
the rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface; to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; and to render, in at least one of the window slots according to a first display parameter, the character operation object associated with the second character object detected by the detection unit 63;
the deployment unit 62 is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
the detection unit 63 is configured to detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition;
the operation execution unit 64 is configured to, when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object;
the acquisition unit 65 is configured to acquire state attribute information of the second character objects in the graphical user interface; and
the communication unit 66 is configured to synchronize the state attribute information acquired by the acquisition unit 65 to a server, and to obtain from the server the state attribute information of the character objects associated with the character operation objects in the character container object.
In one implementation, the rendering processing unit 61 is configured to render, according to the obtained state attribute information, the character operation objects associated with the second character objects in the corresponding window slots in a preset manner.
In this embodiment, for the functions of the rendering processing unit 61, the deployment unit 62, the detection unit 63, and the operation execution unit 64, reference may be made to the description of Embodiment 4, which is not repeated here. The difference is that the acquisition unit 65 acquires the state attribute information of the second character objects in the graphical user interface. Because the virtual space where the first and second character objects are located may be relatively large based on the settings of the software application, the view image displayed in the graphical user interface rendered by the terminal includes the first character object and may or may not include the second character objects. In this embodiment, the terminal obtains the state attribute information of the second character objects contained in its graphical user interface and synchronizes the state attribute information, associated with the corresponding second character objects, to the server. The state attribute information includes, but is not limited to, the blood value, hit points, or skill attribute information of the second character object.
In this embodiment, the communication unit 66 obtains, from the server according to a preset rule, the state attribute information of the character objects associated with the character operation objects in the character container object, so that when the terminal's own graphical user interface does not contain at least some second character objects and the terminal therefore cannot obtain their state attribute information, it can obtain, through the second character objects and associated state attribute information synchronized to the server by other terminals, the state attribute information of the second character objects associated with all the character operation objects included in the character container object. The terminal and the other terminals belong to the same group; it may be understood that the first character object controlled by the terminal and the first character objects controlled by the other terminals belong to the same group in the game system and jointly perform virtual operations on the second character objects belonging to another group. When the terminal's graphical user interface does not include all the second character objects, the graphical user interfaces of the other terminals may include at least some of them; thus, with each terminal of the same group acquiring the state attribute information of the second character objects included in its own graphical user interface, mutual synchronization of the state attribute information of the second character objects is achieved.
In this embodiment, after the state attribute information of the second character objects is acquired by the terminal itself and synchronized from the server, the character operation objects associated with the second character objects are rendered in the corresponding window slots of the character container object in a preset manner. Specifically, referring to FIG. 9, taking the state attribute information being a blood value as an example, the outer-ring area of the character operation object associated with a second character object (see character operation object b22 in FIG. 9) serves as a blood-bar display area b221, and the proportion of blood filling the blood-bar display area b221 represents the current blood value of the corresponding second character object. Of course, in the embodiments of the present invention, the manner of rendering the character operation objects associated with the second character objects in the corresponding window slots according to the state attribute information is not limited to that shown in FIG. 9.
Those skilled in the art should understand that the functions of the processing units in the terminal of this embodiment of the present invention may be understood with reference to the relevant description of the foregoing information processing method; the processing units in the information processing terminal of this embodiment of the present invention may be implemented by analog circuits that implement the functions described in this embodiment of the present invention, or by running, on an intelligent terminal, software that performs the functions described in this embodiment of the present invention.
In Embodiments 4 and 5 of the present invention, the rendering processing unit 61, the deployment unit 62, the detection unit 63, the operation execution unit 64, and the acquisition unit 65 in the terminal may, in practical applications, all be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA) in the terminal; the communication unit 66 in the terminal may, in practical applications, be implemented by a transceiver antenna or a communication interface in the terminal.
Embodiment 6
An embodiment of the present invention further provides a terminal. The terminal may be an electronic device such as a PC, or a portable electronic device such as a tablet computer, a laptop computer, or a smartphone; the game system is executed on the terminal by installing a software application (such as a game application). The terminal includes at least a memory for storing data and a processor for data processing. The processor for data processing may be implemented by a microprocessor, a CPU, a DSP, or an FPGA; the memory contains operation instructions, which may be computer-executable code, and the steps in the flow of the information processing method of the embodiments of the present invention are implemented through the operation instructions.
FIG. 14 is a schematic structural diagram of the terminal according to Embodiment 6 of the present invention. As shown in FIG. 14, the terminal includes: a processor 71 and a display 72. The processor 71 is configured to execute a software application and perform rendering on the display 72 to obtain a graphical user interface; the processor 71, the graphical user interface, and the software application are implemented on a game system.
The processor 71 is further configured to render at least one virtual resource object on the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window slot;
detect, in the graphical user interface, a second character object whose distance to the first character object meets a first preset condition, and render, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window slots; and
when a selection gesture on at least one character operation object in the character container object that has been rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
Specifically, the detecting, by the processor 71, a second character object in the graphical user interface whose distance from the first character object meets the first preset condition includes:
detecting a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
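The first preset condition reduces to a distance comparison. A minimal sketch under the assumption of 2D coordinates; the ids, positions, and threshold value are illustrative:

```python
# Sketch: pick out the second character objects whose distance from the first
# character object is below the first preset threshold, so that their avatars
# can be rendered in the window locations with the first display parameter.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def detect_in_range(first_pos, enemies, first_threshold):
    """enemies: dict of id -> (x, y). Returns ids meeting the first condition."""
    return [eid for eid, pos in enemies.items()
            if distance(first_pos, pos) < first_threshold]

enemies = {"b11": (3.0, 4.0), "b12": (30.0, 40.0)}
targets = detect_in_range((0.0, 0.0), enemies, first_threshold=10.0)
```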
In an implementation, the processor 71 is further configured to: before the selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, detect, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets a second preset condition, and render, according to a second display parameter, the character operation objects associated with the detected at least some second character objects in the at least one window location, wherein rendering according to the second display parameter produces a display effect different from rendering according to the first display parameter.
Specifically, the detecting, by the processor 71, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets the second preset condition includes:
detecting, among the second character objects whose distance from the first character object meets the first preset condition, second character objects whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
In an implementation, the terminal further includes a communications device 74.
The processor 71 is further configured to obtain status attribute information of second character objects in the graphical user interface, synchronize the status attribute information to a server through the communications device 74, and obtain, from the server through the communications device 74, the status attribute information of the character objects associated with the character operation objects in the character container object.
Correspondingly, the processor 71 is further configured to render, according to the obtained status attribute information and in a preset manner, the character operation objects associated with the second character objects in the corresponding window locations.
In this embodiment, the terminal includes a processor 71, a display 72, a memory 73, an input device 76, a bus 75, and a communications device 74. The processor 71, the memory 73, the input device 76, the display 72, and the communications device 74 are all connected through the bus 75, which is used to transmit data between the processor 71, the memory 73, the display 72, and the communications device 74.
The input device 76 is mainly configured to obtain input operations from a user, and may differ depending on the terminal. For example, when the terminal is a PC, the input device 76 may be a mouse, a keyboard, or the like; when the terminal is a portable device such as a smartphone or a tablet computer, the input device 76 may be a touchscreen.
In this embodiment, the memory 73 stores a computer storage medium, the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to perform the information processing method in the embodiments of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. The described device embodiments are merely examples. For example, the unit division is merely a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections between the components may be implemented through some interfaces; the indirect couplings or communication connections between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one position, or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each of the units may serve as one unit separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, performs the steps of the foregoing method embodiments. The storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Alternatively, if the foregoing integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the methods described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present invention, through the window locations in the character container object deployed in the character selection area of the graphical user interface, the character operation objects associated with the second character objects that interact with the first character object are rendered in the corresponding window locations, and the character operation objects associated with the second character objects whose distance from the first character object meets the first preset condition are rendered according to the first display parameter. That is, the UI avatars associated with those second character objects are rendered according to the first display parameter, giving them a display effect that distinguishes them from the other UI avatars. When selecting a target character operation object, the user can therefore, based on this distinguishing display effect, quickly and accurately select the target character object through a selection operation gesture on the character operation object, greatly improving the user's operation experience during interaction.

Claims (19)

  1. An information processing method, wherein a graphical user interface is obtained by executing a software application on a processor of a terminal and performing rendering on a display of the terminal, the processor, the graphical user interface, and the software application being implemented on a game system, the method comprising:
    rendering at least one virtual resource object in the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
    deploying, in at least one character selection area of the graphical user interface, at least one character container object comprising at least one window location;
    detecting a second character object in the graphical user interface whose distance from the first character object meets a first preset condition, and rendering, according to a first display parameter, a character operation object associated with the detected second character object in at least one of the window locations; and
    when a selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, performing, by the first character object, at least one of the first virtual operations on the corresponding second character object.
  2. The method according to claim 1, wherein the detecting a second character object in the graphical user interface whose distance from the first character object meets a first preset condition comprises:
    detecting a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
  3. The method according to claim 1, wherein before the selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, the method further comprises:
    detecting, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets a second preset condition, and rendering, according to a second display parameter, the character operation objects associated with the detected at least some second character objects in the at least one window location,
    wherein rendering according to the second display parameter produces a display effect different from rendering according to the first display parameter.
  4. The method according to claim 3, wherein the detecting, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets a second preset condition comprises:
    detecting, among the second character objects whose distance from the first character object meets the first preset condition, second character objects whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
  5. The method according to claim 1, further comprising: obtaining status attribute information of second character objects in the graphical user interface, and synchronizing the status attribute information to a server;
    and obtaining, from the server, status attribute information of the character objects associated with the character operation objects in the character container object.
  6. The method according to claim 5, further comprising: rendering, according to the obtained status attribute information and in a preset manner, the character operation objects associated with the second character objects in the corresponding window locations.
  7. A terminal, comprising: a rendering processing unit, a deployment unit, a detection unit, and an operation execution unit, wherein
    the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object in the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command; and is further configured to render, according to a first display parameter, the character operation object associated with the second character object detected by the detection unit in at least one of the window locations;
    the deployment unit is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object comprising at least one window location;
    the detection unit is configured to detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition; and
    the operation execution unit is configured to: when a selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
  8. The terminal according to claim 7, wherein the detection unit is configured to detect a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
  9. The terminal according to claim 7, wherein the detection unit is further configured to detect, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets a second preset condition;
    correspondingly, the rendering processing unit is configured to render, according to a second display parameter, the character operation objects associated with the at least some second character objects detected by the detection unit in the at least one window location, wherein rendering according to the second display parameter produces a display effect different from rendering according to the first display parameter.
  10. The terminal according to claim 9, wherein the detection unit is configured to detect, among the second character objects whose distance from the first character object meets the first preset condition, second character objects whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
  11. The terminal according to claim 7, further comprising an obtaining unit and a communications unit, wherein
    the obtaining unit is configured to obtain status attribute information of second character objects in the graphical user interface; and
    the communications unit is configured to synchronize the status attribute information obtained by the obtaining unit to a server, and to obtain, from the server, status attribute information of the character objects associated with the character operation objects in the character container object.
  12. The terminal according to claim 11, wherein the rendering processing unit is configured to render, according to the obtained status attribute information and in a preset manner, the character operation objects associated with the second character objects in the corresponding window locations.
  13. A terminal, comprising a processor and a display, the processor being configured to execute a software application and perform rendering on the display to obtain a graphical user interface, the processor, the graphical user interface, and the software application being implemented on a game system;
    the processor being further configured to render at least one virtual resource object in the graphical user interface, at least one of the virtual resource objects being configured as a first character object that performs a first virtual operation according to an input first user command;
    deploy, in at least one character selection area of the graphical user interface, at least one character container object comprising at least one window location;
    detect a second character object in the graphical user interface whose distance from the first character object meets a first preset condition, and render, according to a first display parameter, the character operation object associated with the detected second character object in at least one of the window locations; and
    when a selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, cause the first character object to perform at least one of the first virtual operations on the corresponding second character object.
  14. The terminal according to claim 13, wherein the detecting, by the processor, a second character object in the graphical user interface whose distance from the first character object meets the first preset condition comprises:
    detecting a second character object in the graphical user interface whose distance from the first character object is less than a first preset threshold.
  15. The terminal according to claim 13, wherein the processor is further configured to: before the selection operation gesture on at least one character operation object in the character container object rendered according to the first display parameter is detected, detect, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets a second preset condition, and render, according to a second display parameter, the character operation objects associated with the detected at least some second character objects in the at least one window location, wherein rendering according to the second display parameter produces a display effect different from rendering according to the first display parameter.
  16. The terminal according to claim 15, wherein the detecting, by the processor, among the second character objects whose distance from the first character object meets the first preset condition, at least some second character objects whose distance from the first character object meets the second preset condition comprises:
    detecting, among the second character objects whose distance from the first character object meets the first preset condition, second character objects whose distance from the first character object reaches a second preset threshold, wherein the second preset threshold is greater than or equal to the first preset threshold.
  17. The terminal according to claim 13, wherein the terminal further comprises a communications device;
    the processor being further configured to obtain status attribute information of second character objects in the graphical user interface, synchronize the status attribute information to a server through the communications device, and obtain, from the server through the communications device, status attribute information of the character objects associated with the character operation objects in the character container object.
  18. The terminal according to claim 17, wherein the processor is further configured to render, according to the obtained status attribute information and in a preset manner, the character operation objects associated with the second character objects in the corresponding window locations.
  19. A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the information processing method according to any one of claims 1 to 6.
PCT/CN2016/081041 2015-09-29 2016-05-04 Information processing method, terminal and computer storage medium WO2017054450A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP16850075.9A EP3273334B1 (en) 2015-09-29 2016-05-04 Information processing method, terminal and computer storage medium
JP2018505518A JP6529659B2 (ja) 2015-09-29 2016-05-04 情報処理方法、端末及びコンピュータ記憶媒体
CA2982868A CA2982868C (en) 2015-09-29 2016-05-04 Method for performing virtual operations on a character object, terminal, and computer storage medium
KR1020177033331A KR102018212B1 (ko) 2015-09-29 2016-05-04 정보 처리 방법, 단말 및 컴퓨터 저장 매체
MYPI2017703956A MY192140A (en) 2015-09-29 2016-05-04 Information processing method, terminal, and computer storage medium
US15/725,140 US10639549B2 (en) 2015-09-29 2017-10-04 Information processing method, terminal, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510633275.3 2015-09-29
CN201510633275.3A CN105335064B (zh) 2015-09-29 2015-09-29 一种信息处理方法和终端

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/725,140 Continuation-In-Part US10639549B2 (en) 2015-09-29 2017-10-04 Information processing method, terminal, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2017054450A1 true WO2017054450A1 (zh) 2017-04-06

Family

ID=55285650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081041 WO2017054450A1 (zh) 2015-09-29 2016-05-04 一种信息处理方法、终端和计算机存储介质

Country Status (8)

Country Link
US (1) US10639549B2 (zh)
EP (1) EP3273334B1 (zh)
JP (1) JP6529659B2 (zh)
KR (1) KR102018212B1 (zh)
CN (1) CN105335064B (zh)
CA (1) CA2982868C (zh)
MY (1) MY192140A (zh)
WO (1) WO2017054450A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11811681B1 (en) 2022-07-12 2023-11-07 T-Mobile Usa, Inc. Generating and deploying software architectures using telecommunication resources

Families Citing this family (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
CN113470641B (zh) 2013-02-07 2023-12-15 苹果公司 数字助理的语音触发器
KR101772152B1 (ko) 2013-06-09 2017-08-28 애플 인크. 디지털 어시스턴트의 둘 이상의 인스턴스들에 걸친 대화 지속성을 가능하게 하기 위한 디바이스, 방법 및 그래픽 사용자 인터페이스
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
CN105335064B (zh) 2015-09-29 2017-08-15 腾讯科技(深圳)有限公司 一种信息处理方法和终端
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
CN105194873B (zh) * 2015-10-10 2019-01-04 腾讯科技(成都)有限公司 一种信息处理方法、终端及计算机存储介质
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
AU2017100670C4 (en) 2016-06-12 2019-11-21 Apple Inc. User interfaces for retrieving contextually relevant media content
KR101866198B1 (ko) * 2016-07-06 2018-06-11 (주) 덱스인트게임즈 터치스크린 기반의 게임제공방법 및 프로그램
CN106774824B (zh) * 2016-10-26 2020-02-04 网易(杭州)网络有限公司 虚拟现实交互方法及装置
CN106422329A (zh) * 2016-11-01 2017-02-22 网易(杭州)网络有限公司 游戏操控方法和装置
CN106512406A (zh) * 2016-11-01 2017-03-22 网易(杭州)网络有限公司 游戏操控方法和装置
JP6143934B1 (ja) * 2016-11-10 2017-06-07 株式会社Cygames 情報処理プログラム、情報処理方法、及び情報処理装置
CN106354418B (zh) * 2016-11-16 2019-07-09 腾讯科技(深圳)有限公司 一种基于触摸屏的操控方法和装置
WO2018095366A1 (zh) 2016-11-24 2018-05-31 腾讯科技(深圳)有限公司 视频推荐确定、信息显示、基于帧同步的数据处理方法
CN107132979A (zh) * 2017-03-14 2017-09-05 网易(杭州)网络有限公司 在移动设备游戏中精确选择目标的交互方法、装置及计算机可读存储介质
US20180292952A1 (en) * 2017-04-05 2018-10-11 Riot Games, Inc. Methods and systems for object selection
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK201770429A1 (en) 2017-05-12 2018-12-14 Apple Inc. LOW-LATENCY INTELLIGENT AUTOMATED ASSISTANT
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770411A1 (en) 2017-05-15 2018-12-20 Apple Inc. MULTI-MODAL INTERFACES
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
CN107441705B (zh) * 2017-07-27 2018-07-20 腾讯科技(深圳)有限公司 对象显示方法和装置及存储介质
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
CN108579089B (zh) * 2018-05-09 2021-11-12 网易(杭州)网络有限公司 虚拟道具控制方法及装置、存储介质、电子设备
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. VIRTUAL ASSISTANT OPERATION IN MULTI-DEVICE ENVIRONMENTS
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
CN108970114A (zh) * 2018-08-21 2018-12-11 苏州蜗牛数字科技股份有限公司 一种通过自定义映射按键实现视野调整的方法
US11446579B2 (en) * 2018-09-11 2022-09-20 Ncsoft Corporation System, server and method for controlling game character
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
CN109582136B (zh) * 2018-11-13 2022-05-03 深圳市创凯智能股份有限公司 三维窗口手势导航方法、装置、移动终端及存储介质
CN109675307B (zh) * 2019-01-10 2020-02-21 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、存储介质、处理器及终端
CN109568956B (zh) * 2019-01-10 2020-03-10 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、存储介质、处理器及终端
US10786734B2 (en) * 2019-02-20 2020-09-29 Supercell Oy Method for facilitating user interactions in gaming environment
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
CN110115838B (zh) * 2019-05-30 2021-10-29 腾讯科技(深圳)有限公司 虚拟环境中生成标记信息的方法、装置、设备及存储介质
US11468890B2 (en) 2019-06-01 2022-10-11 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
JP6818092B2 (ja) * 2019-06-25 2021-01-20 株式会社コロプラ ゲームプログラム、ゲーム方法、および情報端末装置
CN110368691B (zh) * 2019-07-19 2023-09-19 腾讯科技(深圳)有限公司 多人在线对战程序中的提醒信息发送方法、装置及终端
CN110598035B (zh) * 2019-09-08 2023-06-13 北京智明星通科技股份有限公司 一种手机游戏虚拟人物形象推荐方法、装置和移动终端
CN110807728B (zh) 2019-10-14 2022-12-13 北京字节跳动网络技术有限公司 对象的显示方法、装置、电子设备及计算机可读存储介质
CN110882537B (zh) * 2019-11-12 2023-07-25 北京字节跳动网络技术有限公司 一种交互方法、装置、介质和电子设备
CN111013135A (zh) * 2019-11-12 2020-04-17 北京字节跳动网络技术有限公司 一种交互方法、装置、介质和电子设备
CN111013139B (zh) * 2019-11-12 2023-07-25 北京字节跳动网络技术有限公司 角色交互方法、系统、介质和电子设备
CN111068311B (zh) * 2019-11-29 2023-06-23 珠海豹趣科技有限公司 游戏场景的显示控制方法及装置
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
CN111589114B (zh) 2020-05-12 2023-03-10 腾讯科技(深圳)有限公司 虚拟对象的选择方法、装置、终端及存储介质
CN111760267B (zh) * 2020-07-06 2024-08-27 网易(杭州)网络有限公司 游戏中的信息发送方法及装置、存储介质、电子设备
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
CN111821691A (zh) * 2020-07-24 2020-10-27 腾讯科技(深圳)有限公司 界面显示方法、装置、终端及存储介质
CN112057856B (zh) * 2020-09-17 2024-01-30 网易(杭州)网络有限公司 信息提示方法、装置和终端设备
CN112245920A (zh) * 2020-11-13 2021-01-22 腾讯科技(深圳)有限公司 虚拟场景的显示方法、装置、终端及存储介质
KR102589889B1 (ko) * 2021-02-23 2023-10-17 (주)팀스노우볼 게임 ui 분석 방법
JP7416980B2 (ja) * 2021-05-25 2024-01-17 ネットイーズ (ハンチョウ) ネットワーク カンパニー リミテッド ゲームシーンの処理方法、装置、記憶媒体及び電子デバイス
CN113318444B (zh) * 2021-06-08 2023-01-10 天津亚克互动科技有限公司 角色的渲染方法和装置、电子设备和存储介质
CN116059628A (zh) * 2021-06-25 2023-05-05 网易(杭州)网络有限公司 游戏的交互方法、装置、电子设备及可读介质
CN113398566B (zh) * 2021-07-16 2024-10-01 网易(杭州)网络有限公司 游戏的显示控制方法、装置、存储介质及计算机设备
CN113559505B (zh) * 2021-07-28 2024-02-02 网易(杭州)网络有限公司 游戏中的信息处理方法、装置及移动终端

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1743043A (zh) * 2005-06-19 2006-03-08 珠海市西山居软件有限公司 一种网络游戏系统及其实现方法
CN103096134A (zh) * 2013-02-08 2013-05-08 广州博冠信息科技有限公司 一种基于视频直播和游戏的数据处理方法和设备
CN103157281A (zh) * 2013-04-03 2013-06-19 广州博冠信息科技有限公司 一种二维游戏场景显示的方法和设备
US20140113718A1 (en) * 2012-04-26 2014-04-24 Riot Games, Inc. Systems and methods that enable a spectator's experience for online active games
CN105335064A (zh) * 2015-09-29 2016-02-17 腾讯科技(深圳)有限公司 一种信息处理方法、终端和计算机存储介质

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6273818B1 (en) * 1999-10-25 2001-08-14 Square Co., Ltd. Video game apparatus and method and storage medium
JP3888542B2 (ja) * 2002-12-05 2007-03-07 任天堂株式会社 ゲーム装置およびゲームプログラム
JP4057945B2 (ja) * 2003-04-25 2008-03-05 株式会社バンダイナムコゲームス プログラム、情報記憶媒体及びゲーム装置
JP3880008B2 (ja) * 2004-12-21 2007-02-14 株式会社光栄 キャラクタ集団移動制御プログラム、記憶媒体及びゲーム装置
JP4291816B2 (ja) * 2005-12-27 2009-07-08 株式会社コナミデジタルエンタテインメント ゲーム装置、ゲーム装置の制御方法及びプログラム
US8210943B1 (en) * 2006-05-06 2012-07-03 Sony Computer Entertainment America Llc Target interface
US20100302138A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Methods and systems for defining or modifying a visual representation
JP2011212347A (ja) * 2010-03-31 2011-10-27 Namco Bandai Games Inc プログラム、情報記憶媒体、端末及びネットワークシステム
JP5452429B2 (ja) * 2010-09-14 2014-03-26 株式会社スクウェア・エニックス ゲーム装置、ゲームプログラム及びゲームの進行方法
US20120122561A1 (en) * 2010-11-12 2012-05-17 Bally Gaming, Inc. System and method for tournament gaming using social network based team formation
US20130005417A1 (en) * 2011-06-30 2013-01-03 Peter Schmidt Mobile device action gaming
US8814674B2 (en) * 2012-05-24 2014-08-26 Supercell Oy Graphical user interface for a gaming system
EP3517190B1 (en) * 2013-02-01 2022-04-20 Sony Group Corporation Information processing device, terminal device, information processing method, and programme
JP5581434B1 (ja) * 2013-10-31 2014-08-27 株式会社 ディー・エヌ・エー ゲームプログラム、及び、情報処理装置
CN104618797B (zh) * 2015-02-06 2018-02-13 腾讯科技(北京)有限公司 信息处理方法、装置及客户端
CN205064362U (zh) 2015-05-12 2016-03-02 锘威科技(深圳)有限公司 一种新型扇叶
JP6632819B2 (ja) * 2015-06-30 2020-01-22 株式会社バンダイナムコエンターテインメント プログラム、ゲーム装置及びサーバシステム


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3273334A4 *


Also Published As

Publication number Publication date
US20180028918A1 (en) 2018-02-01
KR102018212B1 (ko) 2019-09-04
KR20170137913A (ko) 2017-12-13
CN105335064B (zh) 2017-08-15
CN105335064A (zh) 2016-02-17
JP6529659B2 (ja) 2019-06-12
EP3273334B1 (en) 2023-05-10
US10639549B2 (en) 2020-05-05
CA2982868C (en) 2023-07-18
EP3273334A4 (en) 2018-05-30
CA2982868A1 (en) 2017-04-06
JP2018512988A (ja) 2018-05-24
MY192140A (en) 2022-07-29
EP3273334A1 (en) 2018-01-24

Similar Documents

Publication Publication Date Title
WO2017054450A1 (zh) 一种信息处理方法、终端和计算机存储介质
US10661171B2 (en) Information processing method, terminal, and computer storage medium
US11003261B2 (en) Information processing method, terminal, and computer storage medium
US10814221B2 (en) Method for locking target in game scenario and terminal
JP7502012B2 (ja) 情報処理方法、端末及びコンピュータ記憶媒体
KR102034367B1 (ko) 정보 처리 방법과 단말기, 및 컴퓨터 저장 매체
WO2017054464A1 (zh) 一种信息处理方法、终端及计算机存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850075

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2982868

Country of ref document: CA

REEP Request for entry into the european phase

Ref document number: 2016850075

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018505518

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20177033331

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE