WO2017054452A1 - Information processing method, terminal, and computer storage medium - Google Patents

Information processing method, terminal, and computer storage medium

Info

Publication number
WO2017054452A1
WO2017054452A1 (PCT/CN2016/081051)
Authority
WO
WIPO (PCT)
Prior art keywords
role
character
terminal
user interface
graphical user
Prior art date
Application number
PCT/CN2016/081051
Other languages
English (en)
French (fr)
Inventor
陈宇
唐永
龚伟
翁建苗
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP16850077.5A (EP3285156B1)
Priority to CA2985867A (CA2985867C)
Priority to KR1020177035385A (KR20180005689A)
Priority to JP2017564016A (JP6830447B2)
Priority to MYPI2017704330A (MY195861A)
Publication of WO2017054452A1
Priority to US15/725,146 (US10661171B2)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/33 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/45 Controlling the progress of the video game
    • A63F 13/49 Saving the game status; Pausing or ending the game
    • A63F 13/497 Partially or entirely replaying previous game actions
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/847 Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/33 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F 13/335 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/55 Details of game data or player data management
    • A63F 2300/5526 Game data structure
    • A63F 2300/5533 Game data structure using program state or machine event data, e.g. server keeps track of the state of multiple players on in a multiple player game

Definitions

  • The present invention relates to information processing technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
  • GUI: graphical user interface.
  • In the related technology, the graphical user interface rendered on the screen often displays only a part of the virtual area in which the virtual character operated by the user is located, so that during control the graphical user interface may not include the target object manipulated by a group member belonging to the same group as the user.
  • In this case, when the user wants to get the view of that group member, multiple operations (such as sliding operations) are required to move the character to the vicinity of the target object before the image presented on the graphical user interface controlled by the group member, that is, the group member's field of view, can be obtained in the current graphical user interface.
  • This process takes a long manipulation time and cannot meet the requirement of quick information interaction; for this problem, there is currently no effective solution in the related technology.
  • the embodiments of the present invention are intended to provide an information processing method, a terminal, and a computer storage medium, which can quickly obtain a view image of a group member in the process of information interaction, thereby improving the user experience.
  • An embodiment of the present invention provides an information processing method. A graphical user interface is obtained by executing a software application on a processor of a terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. The method includes:
  • at least one role object deployed in at least one role selection area of the graphical user interface includes at least one window bit; and
  • when a view acquisition gesture for at least one role operation object in the role object is detected, a view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface.
  • The embodiment of the present invention further provides a terminal, where the terminal includes: a rendering processing unit, a deployment unit, a detecting unit, and an operation executing unit.
  • The rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the view image that is obtained by the operation executing unit and captured by the virtual lens associated with the at least one role operation object.
  • The deployment unit is configured to deploy, in the at least one role selection area of the graphical user interface, at least one role object that includes at least one window bit.
  • The detecting unit is configured to detect a view acquisition gesture for at least one role operation object in the role object.
  • The operation executing unit is configured to, when the detecting unit detects the view acquisition gesture for the at least one role operation object in the role object, obtain the view image captured by the virtual lens associated with the at least one role operation object.
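  • As a rough illustration only, the division of work among the units just described might be sketched as follows; every class, method, and variable name here is a hypothetical assumption, since the embodiment names the units but defines no code-level API.

```python
class RenderingProcessingUnit:
    """Renders virtual resource objects and received view images on the GUI."""
    def __init__(self):
        self.rendered = []

    def render(self, obj):
        # In a real client this would draw to the display; here we record it.
        self.rendered.append(obj)

class DeploymentUnit:
    def deploy_role_object(self, gui, window_count):
        # Deploy a role object with `window_count` window bits in the role selection area.
        gui["role_selection_area"] = {"window_bits": [None] * window_count}

class DetectingUnit:
    def detect_view_gesture(self, gesture):
        # Treat a long press or double tap as a view acquisition gesture.
        return gesture in ("long_press", "double_tap")

class OperationExecutionUnit:
    def obtain_view_image(self, virtual_lens):
        # Ask the virtual lens associated with the role operation object for a frame.
        return virtual_lens()

# Wiring the units together for one detected gesture:
gui = {}
DeploymentUnit().deploy_role_object(gui, window_count=4)
renderer = RenderingProcessingUnit()
detector = DetectingUnit()
executor = OperationExecutionUnit()
if detector.detect_view_gesture("long_press"):
    image = executor.obtain_view_image(lambda: "teammate_view_frame")
    renderer.render(image)
```

The sketch only mirrors the data flow the embodiment describes: detection triggers the operation executing unit, whose result is handed back to the rendering processing unit.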
  • An embodiment of the present invention further provides a terminal, where the terminal includes a processor and a display; the processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface, and the processor, the graphical user interface, and the software application are implemented on a game system.
  • The processor is configured to render at least one virtual resource object on the graphical user interface, where at least one role object deployed in at least one role selection area of the graphical user interface includes at least one window bit.
  • When a view acquisition gesture for at least one of the role operation objects is detected, a view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the information processing method according to the embodiment of the invention.
  • With the information processing method, the terminal, and the computer storage medium of the embodiments of the present invention, the role operation object associated with a second role object belonging to the same group as the user role object is rendered in the corresponding window bit of the role object deployed in the role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on that role operation object, thereby greatly improving the user's operation experience in the interaction process.
  • FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention.
  • FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention.
  • FIG. 5 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention.
  • FIG. 6 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of an information processing method according to Embodiment 4 of the present invention.
  • FIG. 8 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an interaction application of an information processing method according to an embodiment of the present invention.
  • FIG. 10 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention.
  • FIG. 12 is a schematic structural diagram of a terminal of a sixth embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a terminal of a seventh embodiment of the present invention.
  • FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention
  • The application architecture includes: a server 101 and at least one terminal, where the terminal includes: the terminal 102, the terminal 103, the terminal 104, the terminal 105, and the terminal 106; the at least one terminal can establish a connection with the server 101 through a network 100, such as a wired network or a wireless network.
  • The terminal includes a mobile phone, a desktop computer, a PC, an all-in-one computer, and the like.
  • the processor of the terminal is capable of executing a software application and rendering on a display of the terminal to obtain a graphical user interface, the processor, the graphical user interface and the software application being implemented on the game system .
  • the at least one terminal may perform information interaction with the server 101 through a wired network or a wireless network.
  • Based on the application architecture, the application mode scenarios include one-to-one or many-to-many (e.g., three-to-three, five-to-five) modes.
  • The one-to-one application scenario may be information interaction between a virtual resource object in the graphical user interface rendered by a terminal and a virtual resource object preset in the game system (which can be understood as a human-machine battle), that is, information interaction between the terminal and the server; the one-to-one application scenario may also be information interaction between a virtual resource object in the graphical user interface rendered by one terminal and a virtual resource object in the graphical user interface rendered by another terminal, for example, between the virtual resource object in the graphical user interface rendered by the terminal 102 and the virtual resource object in the graphical user interface rendered by the terminal 103.
  • Taking a three-to-three application mode scenario as an example of the many-to-many mode, the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 1, the terminal 2, and the terminal 3 form a first group, and the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 4, the terminal 5, and the terminal 6 form a second group; information is exchanged between the group members of the first group and the group members of the second group.
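  • The grouping of terminals described above can be sketched as follows; the `Group` class and its fields are illustrative assumptions, not part of the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    """A group of terminals whose virtual resource objects play together."""
    name: str
    terminals: list = field(default_factory=list)

def form_groups(terminal_ids, group_size):
    """Split terminals into consecutive groups of `group_size` (e.g. 3 for three-to-three)."""
    groups = []
    for i in range(0, len(terminal_ids), group_size):
        groups.append(Group(name=f"group_{len(groups) + 1}",
                            terminals=terminal_ids[i:i + group_size]))
    return groups

# Three-to-three example: terminals 1-3 form the first group, 4-6 the second.
groups = form_groups([1, 2, 3, 4, 5, 6], group_size=3)
```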
  • It should be noted that FIG. 1 is only an example of an application architecture for implementing the embodiments of the present invention; the embodiments of the present invention are not limited to the application architecture described in FIG. 1, and the various embodiments of the present invention are proposed based on this application architecture.
  • FIG. 2 is a schematic flowchart diagram of an information processing method according to Embodiment 1 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 2, the method includes:
  • Step 201: Render at least one virtual resource object on the graphical user interface, where at least one of the virtual resource objects is configured as a user role object that performs a first virtual operation according to an input first user command.
  • Step 202: Deploy, in at least one role selection area of the graphical user interface, at least one role object that includes at least one window bit.
  • Step 203: When a view acquisition gesture for at least one role operation object in the role object is detected, render, on the graphical user interface, a view image captured by the virtual lens associated with the at least one role operation object.
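  • The three steps can be summarized in a short sketch; the dictionary layout and all identifiers (such as "a11" and the lens frame string) are illustrative assumptions, since the patent defines no concrete data structures.

```python
def information_processing(gesture_event):
    gui = {}

    # Step 201: render at least one virtual resource object, including the
    # user role object that performs virtual operations on user commands.
    gui["objects"] = ["user_role_a10", "skill_object_803"]

    # Step 202: deploy a role object with window bits in the role selection area;
    # each occupied window bit carries a teammate's role operation object (avatar).
    gui["role_selection_area"] = {"window_bits": ["a11", "a12", "a13", "a14"]}

    # Step 203: on a view acquisition gesture aimed at a role operation object,
    # render the view image captured by that teammate's virtual lens.
    if gesture_event and gesture_event["target"] in gui["role_selection_area"]["window_bits"]:
        gui["view_image"] = f"lens_frame_of_{gesture_event['target']}"
    return gui

# A long press on teammate a11's avatar yields a11's view image.
gui = information_processing({"target": "a11", "type": "long_press"})
```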
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one role object, and the role object includes at least one window bit, where at least part of the window bits carry corresponding role operation objects.
  • A role operation object is represented in the graphical user interface by an identifier of the role object associated with it (the identifier may be an avatar); here, the role object associated with the role operation object belongs to the same group as the user role object.
  • The rendering manner of the role object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the role object can be represented by a role selection bar object or a role selection disk object.
  • FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention
  • a graphical user interface 800 rendered on a display of the terminal includes at least one virtual resource object;
  • The virtual resource object includes at least one user role object a10, and the user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the user role object a10 can perform a first virtual operation based on the first user command detected by the terminal; the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like.
  • The user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations.
  • the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation by a skill release operation.
  • the graphical user interface has a role selection area 802; a role object is deployed in the role selection area 802.
  • The role object is represented by a role selection bar object (i.e., the role object presents a strip display effect).
  • The role object includes at least one window bit, and the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window bit; that is, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second role object of the same group as the user role object.
  • As shown in FIG. 3, in the five-to-five application scenario, four role objects belong to the same group as the user role object a10, and there are correspondingly four role operation objects in the role selection area 802.
  • This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
  • The mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
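  • The ordering rule above, sorting role operation objects by the time their role objects entered the game system, can be sketched as follows; the timestamps are made-up illustration data.

```python
def order_role_operation_objects(entries):
    """entries: mapping of role operation object id -> entry timestamp (seconds).
    Returns the ids ordered by the chronological order of entering the game system."""
    return sorted(entries, key=entries.get)

# a11's role object entered earliest, so it is placed first in area 802.
order = order_role_operation_objects(
    {"a13": 12.0, "a11": 3.5, "a14": 20.1, "a12": 7.2})
```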
  • When a view acquisition gesture for at least one role operation object in the role object is detected, the view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface; the view acquisition gesture may be a long press gesture, a double tap gesture, or the like, and is not limited to the above gestures.
  • In an implementation, the method further includes: when the view acquisition gesture for the at least one role operation object in the role object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to capture a view image; and obtaining, during the detection of the view acquisition gesture, the view image captured by the virtual lens.
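  • The lifecycle of the first instruction, issued once while the gesture is held, with frames collected only during the gesture, might look like the following sketch; the instruction dictionary and callback names are illustrative assumptions.

```python
def handle_view_gesture(gesture_active, virtual_lens, send_instruction):
    """While the view acquisition gesture is detected, send a first instruction
    to invoke the associated virtual lens, then collect the frames it captures.
    `gesture_active` yields True for each tick the gesture is still held."""
    frames = []
    instruction_sent = False
    for active in gesture_active:
        if not active:
            break  # gesture released: stop obtaining view images
        if not instruction_sent:
            # Generate and transmit the first instruction exactly once.
            send_instruction({"type": "first_instruction", "action": "invoke_lens"})
            instruction_sent = True
        frames.append(virtual_lens())  # frame captured during the gesture
    return frames

sent = []
frames = handle_view_gesture(
    gesture_active=[True, True, False, True],  # released on the third tick
    virtual_lens=lambda: "frame",
    send_instruction=sent.append)
```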
  • Specifically, when the terminal detects a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction, establishes, based on the first instruction, a network link with the other terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control the other terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens.
  • The terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface, as shown in the view image display area 801 and the enlarged view 801a of the view image display area 801 in FIG. 3; the view image shows the role object b11 performing a release operation of a skill object. It can be understood that, through the view acquisition gesture (such as a long press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the view image of a teammate.
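  • The two-terminal interaction, local terminal sends the first instruction over a link, the teammate's terminal invokes its virtual lens and returns its view, can be sketched as below; the network link is simulated as a direct method call, and all names are illustrative assumptions.

```python
class RemoteTerminal:
    """The teammate's terminal: invokes its virtual lens when the first
    instruction arrives over the network link (simulated here as a call)."""
    def __init__(self, role_object):
        self.role_object = role_object
        self.lens_on = False

    def receive(self, instruction):
        if instruction["type"] == "first_instruction":
            self.lens_on = True  # invoke the virtual lens

    def capture(self):
        # One frame of the teammate's field of view, sent back in real time.
        return f"view_of_{self.role_object}" if self.lens_on else None

class LocalTerminal:
    def __init__(self):
        self.view_display_area = []  # corresponds to area 801 in FIG. 3

    def on_long_press(self, remote):
        # Establish the link, send the first instruction, and render the frames.
        remote.receive({"type": "first_instruction"})
        frame = remote.capture()
        if frame is not None:
            self.view_display_area.append(frame)

teammate = RemoteTerminal(role_object="b11")
local = LocalTerminal()
local.on_long_press(teammate)
```

In a real client the link would carry a stream of frames while the gesture is held; the single frame here stands in for that stream.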
  • With the technical solution of the embodiment of the present invention, the role operation object associated with a second role object belonging to the same group as the user role object is rendered in the corresponding window bit of the role object in the role selection area deployed in the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on that role operation object, thereby greatly improving the user's operation experience in the interaction process.
  • FIG. 4 is a schematic flowchart diagram of an information processing method according to Embodiment 2 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented on the game system. As shown in FIG. 4, the method includes:
  • Step 301 Render at least one virtual resource object on the graphical user interface.
  • Step 302: Deploy, in the at least one role selection area of the graphical user interface, at least one role object that includes at least one window bit.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one role object, and the role object includes at least one window bit, where at least part of the window bits carry corresponding role operation objects.
  • A role operation object is represented in the graphical user interface by an identifier of the role object associated with it (the identifier may be an avatar); here, the role object associated with the role operation object belongs to the same group as the user role object.
  • The rendering manner of the role object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the role object can be represented by a role selection bar object or a role selection disk object.
  • The graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource object includes at least one user role object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command. The user role object a10 can perform a first virtual operation based on a first user command detected by the terminal; the first virtual operation includes, but is not limited to, a moving operation, a physical attack operation, a skill attack operation, and so on.
  • Here, the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform a corresponding action in the graphical user interface based on an operation of the user.
  • The graphical user interface 800 further includes at least one skill object 803; by operating a skill object, the user can control the user role object a10 to perform a corresponding skill release operation.
  • The graphical user interface has a role selection area 802, and a character container object is deployed in the role selection area 802. In this embodiment, the character container object is represented by a character selection bar object; that is, the character container object presents a strip-shaped display effect. The character container object includes at least one window bit, and the role operation object associated with each second character object belonging to the same group as the user role object is rendered in a corresponding window bit.
  • Specifically, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second role object of the same group as the user role object. As shown in FIG. 3, in a five-to-five application scenario, there are four role objects that belong to the same group as the user role object a10, and correspondingly four role operation objects are rendered in the role selection area 802.
  • This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
  • The mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the role objects associated with the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
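The ordering rule described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and field names (`RoleObject`, `entry_time`, `group`) are illustrative assumptions.

```python
# Hypothetical sketch: assign teammates' role operation objects to window
# bits in the chronological order their role objects entered the game system.
from dataclasses import dataclass

@dataclass
class RoleObject:
    name: str
    entry_time: float  # time the role object entered the game system
    group: int

def assign_window_bits(user_role: RoleObject, all_roles: list) -> list:
    """Return teammates' identifiers ordered by entry time, one per window bit."""
    teammates = [r for r in all_roles
                 if r.group == user_role.group and r is not user_role]
    teammates.sort(key=lambda r: r.entry_time)
    return [r.name for r in teammates]

user = RoleObject("a10", 3.0, group=1)
others = [RoleObject("a12", 2.0, 1), RoleObject("a11", 1.0, 1),
          RoleObject("b11", 1.5, 2), RoleObject("a13", 4.0, 1)]
print(assign_window_bits(user, others + [user]))  # ['a11', 'a12', 'a13']
```

Note that the opposing-group object (b11 here) is excluded: only same-group second role objects occupy window bits.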
  • Step 303: When a visual field acquisition gesture on at least one role operation object in the character container object is detected, generate and transmit a first instruction, and obtain, during detection of the visual field acquisition gesture, a field-of-view image acquired by a virtual lens; the first instruction is used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to acquire a field-of-view image, so as to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with the at least one role operation object.
  • Specifically, when the terminal detects a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction, establishes, based on the first instruction, a network link to the other terminal corresponding to the role object associated with the role operation object, and sends the first instruction over the network link to that terminal, so as to control the other terminal to invoke its virtual lens based on the first instruction and to collect a field-of-view image by using the virtual lens.
  • The terminal then obtains, in real time, the field-of-view image sent by the other terminal and renders the field-of-view image on the graphical user interface, as shown by the visual field image display area 801 and its enlarged view 801a in FIG. 3; the field-of-view image of the character object b11 performing a release operation of a skill object can be seen in FIG. 3. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the field-of-view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the field-of-view image of a teammate.
  • Step 304: When the visual field acquisition gesture is terminated, generate a second instruction, and terminate, based on the second instruction, the call to the virtual lens associated with the at least one role operation object.
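Steps 303 and 304 pair a gesture-start event with a first instruction and a gesture-end event with a second instruction. A minimal sketch of that lifecycle follows; the class and instruction field names are illustrative assumptions, not the patent's wording.

```python
# Hypothetical sketch of Steps 303-304: a long-press (visual field
# acquisition) gesture produces a "first instruction" invoking the
# teammate's virtual lens; releasing the gesture produces a "second
# instruction" terminating the call to that lens.
class ViewGestureController:
    def __init__(self):
        self.active_lens = None  # role operation object whose lens is invoked

    def on_gesture_start(self, role_operation_object: str) -> dict:
        self.active_lens = role_operation_object
        # First instruction: invoke the associated virtual lens and
        # control it to acquire field-of-view images.
        return {"type": "first_instruction", "target": role_operation_object}

    def on_gesture_end(self) -> dict:
        target, self.active_lens = self.active_lens, None
        # Second instruction: terminate the call to the virtual lens.
        return {"type": "second_instruction", "target": target}

ctrl = ViewGestureController()
first = ctrl.on_gesture_start("a11")
second = ctrl.on_gesture_end()
print(first["type"], second["type"])  # first_instruction second_instruction
```

The field-of-view image would be rendered only while `active_lens` is set, i.e. while the gesture is held.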
  • In this embodiment, the role operation objects associated with the second role objects that belong to the same group as the user role object are rendered in corresponding window bits of the character container object deployed in the role selection area of the graphical user interface, so that the user can quickly obtain the field-of-view image of a corresponding second character object through a visual field acquisition gesture on a role operation object, thereby greatly improving the operation experience of the user in the interaction process.
  • FIG. 5 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 5, the method includes the following steps:
  • Step 401: Render at least one virtual resource object on the graphical user interface.
  • Step 402: Deploy at least one character container object in at least one role selection area of the graphical user interface, the character container object including at least one window bit.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one character container object, and the character container object includes at least one window bit, where at least some of the window bits carry corresponding role operation objects. A role operation object is represented in the graphical user interface by an identifier of the role object associated with the role operation object (the identifier may be an avatar); here, the role object associated with a role operation object belongs to the same group as the user role object.
  • The rendering manner of the character container object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object can be represented by a character selection bar object or a character selection disk object.
  • The graphical user interface 800 includes at least one virtual resource object, and the virtual resource object includes at least one user role object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command. The user role object a10 can perform a first virtual operation based on a first user command detected by the terminal; the first virtual operation includes, but is not limited to, a moving operation, a physical attack operation, a skill attack operation, and the like.
  • Here, the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform a corresponding action in the graphical user interface based on an operation of the user.
  • The graphical user interface 800 further includes at least one skill object 803; by operating a skill object, the user can control the user role object a10 to perform a corresponding skill release operation.
  • The graphical user interface has a role selection area 802, and a character container object is deployed in the role selection area 802. In this embodiment, the character container object is represented by a character selection bar object; that is, the character container object presents a strip-shaped display effect. The character container object includes at least one window bit, and the role operation object associated with each second character object belonging to the same group as the user role object is rendered in a corresponding window bit.
  • Specifically, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second role object of the same group as the user role object. As shown in FIG. 3, in a five-to-five application scenario, there are four role objects that belong to the same group as the user role object a10, and correspondingly four role operation objects are rendered in the role selection area 802.
  • This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
  • The mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the role objects associated with the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
  • Step 403: When a visual field acquisition gesture on at least one role operation object in the character container object is detected, render, on the graphical user interface, a field-of-view image captured by the virtual lens associated with the at least one role operation object.
  • Here, the method further includes: when the visual field acquisition gesture on the at least one role operation object in the character container object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to acquire a field-of-view image; and, during detection of the visual field acquisition gesture, obtaining the field-of-view image acquired by the virtual lens.
  • Specifically, when the terminal detects a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction, establishes, based on the first instruction, a network link to the other terminal corresponding to the role object associated with the role operation object, and sends the first instruction over the network link to that terminal, so as to control the other terminal to invoke its virtual lens based on the first instruction and to collect a field-of-view image by using the virtual lens.
  • The terminal then obtains, in real time, the field-of-view image sent by the other terminal and renders the field-of-view image on the graphical user interface, as shown by the visual field image display area 801 and its enlarged view 801a in FIG. 3.
  • In this example, the role object c11 associated with the role operation object a11 is currently performing a release operation of a skill object toward another character object b11, and the visual field image display area 801 of the graphical user interface 800 displays the field-of-view image in which the role object c11 associated with the role operation object a11 is performing the release operation of the skill object toward the character object b11, as shown in FIG. 3. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the field-of-view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the field-of-view image of a teammate.
  • Further, when the visual field acquisition gesture is terminated, a second instruction is generated, and the call to the virtual lens associated with the at least one role operation object is terminated based on the second instruction.
  • Step 404: Continuously record changes of the state attributes of the user role object in the graphical user interface, generate state attribute information of the user role object, and synchronously update the state attribute information to the server.
  • Step 405: Obtain, from the server, state attribute information of the at least one role object associated with the at least one role operation object, and render the state attribute information, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
  • Specifically, the terminal continuously records changes of the state attributes of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records changes of the state attributes of the user role object in real time, thereby obtaining state attribute information of the user role object. The state attribute information includes, but is not limited to, a blood volume value, a health value, or skill attribute information of the user role object. The terminal synchronizes the obtained state attribute information of the user role object to the server in real time.
  • Correspondingly, for at least one second role object belonging to the same group as the user role object, the terminal corresponding to the second role object also synchronizes state attribute information of the second role object to the server in real time.
  • Further, the terminal obtains, from the server, the state attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the state attribute information of the at least one role object associated with the at least one role operation object in the character container object in the graphical user interface. This may be understood as: the terminal obtains the state attribute information of the second role objects belonging to the same group as the user role object, and renders the state attribute information of each second role object, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
  • FIG. 6 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention. As shown in FIG. 6, when the state attribute information is a blood volume value, the outer ring region of the role operation object a21 in the role selection area 802 serves as a blood bar display region a211, and the current blood volume value of the corresponding second character object is represented by the proportion of the filled blood volume in the blood bar display region a211. Certainly, the manner in which the state attribute information of a second role object is rendered on the associated role operation object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 6.
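The proportional blood-bar rendering of FIG. 6 reduces to computing a fill fraction for the outer ring. A minimal sketch, with the function name and clamping behavior as illustrative assumptions:

```python
# Hypothetical sketch of the "first preset display manner" of FIG. 6:
# the outer ring of a role operation object is filled in proportion to
# the teammate's current blood volume value.
def blood_ring_fill(current_hp: float, max_hp: float) -> float:
    """Fraction of the outer-ring blood bar to fill, clamped to [0, 1]."""
    if max_hp <= 0:
        return 0.0
    return max(0.0, min(1.0, current_hp / max_hp))

print(blood_ring_fill(350, 1000))   # 0.35 -> ring is 35% filled
print(blood_ring_fill(1200, 1000))  # clamped to 1.0 (full ring)
print(blood_ring_fill(0, 1000))    # 0.0 (empty ring)
```

A renderer would map this fraction to an arc angle on the ring-shaped region a211.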
  • In this embodiment, the role operation objects associated with the second role objects that belong to the same group as the user role object are rendered in corresponding window bits of the character container object deployed in the role selection area of the graphical user interface, so that the user can quickly obtain the field-of-view image of a corresponding second character object through a visual field acquisition gesture on a role operation object, thereby greatly improving the operation experience of the user in the interaction process.
  • Moreover, by synchronizing the state attribute information of the second role objects (ie, teammates) belonging to the same group, the terminal obtains the state attribute information of the second role objects associated with the role operation objects in the character container object, and renders the state attribute information in the corresponding window bits in a specific manner; that is, the state attribute information of a second character object (ie, a teammate) is reflected on the corresponding role operation object (UI avatar), so that the user can quickly learn the state attribute information of the second character objects (ie, teammates), which improves the user's operation experience in the information interaction process.
  • FIG. 7 is a schematic flowchart of an information processing method according to Embodiment 4 of the present invention.
  • The information processing method is applied to a terminal. A graphical user interface is obtained by executing a software application on a processor of the terminal and performing rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 7, the method includes the following steps:
  • Step 501: Render at least one virtual resource object on the graphical user interface.
  • Step 502: Deploy at least one character container object in at least one role selection area of the graphical user interface, the character container object including at least one window bit.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one character container object, and the character container object includes at least one window bit, where at least some of the window bits carry corresponding role operation objects. A role operation object is represented in the graphical user interface by an identifier of the role object associated with the role operation object (the identifier may be an avatar); here, the role object associated with a role operation object belongs to the same group as the user role object.
  • The rendering manner of the character container object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object can be represented by a character selection bar object or a character selection disk object.
  • The graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource object includes at least one user role object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command. The user role object a10 can perform a first virtual operation based on a first user command detected by the terminal; the first virtual operation includes, but is not limited to, a moving operation, a physical attack operation, a skill attack operation, and so on.
  • Here, the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform a corresponding action in the graphical user interface based on an operation of the user.
  • The graphical user interface 800 further includes at least one skill object 803; by operating a skill object, the user can control the user role object a10 to perform a corresponding skill release operation.
  • The graphical user interface has a role selection area 802, and a character container object is deployed in the role selection area 802. In this embodiment, the character container object is represented by a character selection bar object; that is, the character container object presents a strip-shaped display effect. The character container object includes at least one window bit, and the role operation object associated with each second character object belonging to the same group as the user role object is rendered in a corresponding window bit.
  • Specifically, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second role object of the same group as the user role object. As shown in FIG. 3, in a five-to-five application scenario, there are four role objects that belong to the same group as the user role object a10, and correspondingly four role operation objects are rendered in the role selection area 802.
  • This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
  • The mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the role objects associated with the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
  • Step 503: When a visual field acquisition gesture on at least one role operation object in the character container object is detected, render, on the graphical user interface, a field-of-view image captured by the virtual lens associated with the at least one role operation object.
  • Here, the method further includes: when the visual field acquisition gesture on the at least one role operation object in the character container object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to acquire a field-of-view image; and, during detection of the visual field acquisition gesture, obtaining the field-of-view image acquired by the virtual lens.
  • Specifically, when the terminal detects a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction, establishes, based on the first instruction, a network link to the other terminal corresponding to the role object associated with the role operation object, and sends the first instruction over the network link to that terminal, so as to control the other terminal to invoke its virtual lens based on the first instruction and to collect a field-of-view image by using the virtual lens.
  • The terminal then obtains, in real time, the field-of-view image sent by the other terminal and renders the field-of-view image on the graphical user interface, as shown by the visual field image display area 801 and its enlarged view 801a in FIG. 3; the field-of-view image of the character object b11 performing a release operation of a skill object can be seen in FIG. 3. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the field-of-view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the field-of-view image of a teammate.
  • Further, when the visual field acquisition gesture is terminated, a second instruction is generated, and the call to the virtual lens associated with the at least one role operation object is terminated based on the second instruction.
  • Step 504: Continuously record changes of the state attributes of the user role object in the graphical user interface to generate state attribute information of the user role object; continuously record changes of the skill attributes of the user role object in the graphical user interface, and, when it is determined that a skill attribute of the user role object reaches a preset condition, generate skill attribute information of the user role object; and synchronously update the state attribute information and the skill attribute information to the server.
  • Specifically, the terminal continuously records changes of the state attributes of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records changes of the state attributes of the user role object in real time, thereby obtaining state attribute information of the user role object. The state attribute information includes, but is not limited to, a blood volume value, a health value, or skill attribute information of the user role object. The terminal synchronizes the obtained state attribute information of the user role object to the server in real time. Correspondingly, the terminal corresponding to a second role object also synchronizes state attribute information of the second role object to the server in real time.
  • The terminal also continuously records changes of the skill attributes of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records changes of the skill attributes of the user role object in real time. Since a skill object needs a recovery period after the user role object releases it, the skill object can be released again only after that period of time elapses. The terminal records the changes of the skill attributes of the user role object in real time, and when it determines that at least one skill object can be released, it determines that the skill attribute of the user role object reaches the preset condition and generates skill attribute information of the user role object; the skill attribute information indicates that the user role object is capable of releasing at least one skill object.
  • The terminal synchronizes the obtained skill attribute information of the user role object to the server in real time. Correspondingly, the terminal corresponding to a second role object also synchronizes skill attribute information of the second role object to the server in real time.
  • Step 505: Obtain, from the server, state attribute information and skill attribute information of the at least one role object associated with the at least one role operation object; render the state attribute information, in a first preset display manner, in at least one window bit corresponding to the associated role operation object, and render the skill attribute information, in a second preset display manner, in at least one window bit corresponding to the associated role operation object.
  • Specifically, the terminal obtains, from the server, the state attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the state attribute information of the at least one role object associated with the at least one role operation object in the character container object in the graphical user interface. This may be understood as: the terminal obtains the state attribute information of the second role objects belonging to the same group as the user role object, and renders the state attribute information of each second role object, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
  • FIG. 8 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention. As shown in FIG. 8, when the state attribute information is a blood volume value, the outer ring region of the role operation object a31 in the role selection area 802 serves as a blood bar display region a311, and the current blood volume value of the corresponding second character object is represented by the proportion of the filled blood volume in the blood bar display region a311. Certainly, the manner in which the state attribute information of a second role object is rendered on the associated role operation object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 8.
  • Further, the terminal obtains, from the server, the skill attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the skill attribute information of the at least one role object associated with the at least one role operation object in the character container object in the graphical user interface. This may be understood as: the terminal obtains the skill attribute information of the second role objects belonging to the same group as the user role object, and renders the skill attribute information of each second role object, in a second preset display manner, in at least one window bit corresponding to the associated role operation object; the skill attribute information displayed on a role operation object indicates that the corresponding second role object can currently release at least one skill object. Referring to FIG. 8, the skill attribute information is represented by a circular identifier a312 in the upper right corner of the role operation object a31 in the character container object. When a role operation object displays the circular identifier a312, it indicates that the second character object associated with the role operation object can currently release at least one skill object; when a role operation object does not display the circular identifier, it indicates that the second role object associated with the role operation object cannot currently release any skill object.
  • Certainly, the manner in which the skill attribute information of a second role object is rendered on the associated role operation object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 8.
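The visibility rule for the circular identifier a312 is a simple predicate over the synchronized skill attribute information. A minimal sketch, where the `can_release` key is an illustrative assumption about how that information might be encoded:

```python
# Hypothetical sketch of the "second preset display manner" of FIG. 8:
# the circular identifier a312 is shown on a role operation object only
# when the associated teammate can currently release at least one skill.
def show_skill_identifier(skill_attribute_info: dict) -> bool:
    """Return True when the circular identifier a312 should be rendered.

    skill_attribute_info is assumed to carry a 'can_release' flag
    synchronized from the server; missing info hides the identifier.
    """
    return bool(skill_attribute_info.get("can_release"))

print(show_skill_identifier({"can_release": True}))   # True  -> render a312
print(show_skill_identifier({"can_release": False}))  # False -> hide a312
print(show_skill_identifier({}))                      # False -> hide a312
```

Treating missing information as "hide" is a conservative default; the patent does not specify behavior when no skill attribute information has yet been synchronized.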
  • In this embodiment, the role operation objects associated with the second role objects that belong to the same group as the user role object are rendered in corresponding window bits of the character container object deployed in the role selection area of the graphical user interface, so that the user can quickly obtain the field-of-view image of a corresponding second character object through a visual field acquisition gesture on a role operation object, thereby greatly improving the operation experience of the user in the interaction process.
  • Moreover, by synchronizing the state attribute information and the skill attribute information of the second character objects (ie, teammates) belonging to the same group, the terminal obtains the state attribute information and the skill attribute information of the second role objects associated with the role operation objects in the character container object, and renders the state attribute information and the skill attribute information in the corresponding window bits in a specific manner; that is, the state attribute information and the skill attribute information of a second role object (ie, a teammate) are reflected on the corresponding role operation object (UI avatar), so that the user can quickly learn the state attribute information and the skill attribute information of the second character objects (ie, teammates), which enhances the user's operation experience in the information interaction process.
  • FIG. 9 is a schematic diagram of an interaction application of an information processing method according to an embodiment of the present invention. As shown in FIG. 9, the application scenario includes a terminal 1, a terminal 2, a terminal 3, a terminal 4, and a server 5, where the terminal 1 is operated by a user 1 through triggering operations, the terminal 2 by a user 2, the terminal 3 by a user 3, and the terminal 4 by a user 4; the method includes:
  • Step 11: The user 1 triggers the game system on the terminal 1 and logs in with authentication information; the authentication information may be a username and a password.
  • Step 12: The terminal 1 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the identity verification is passed, returns a first graphical user interface to the terminal 1, where the first graphical user interface includes a first role object. The first role object is capable of performing a virtual operation based on a triggering operation of the user 1, the virtual operation including a moving operation of the first role object, an attack operation or a skill release operation of the first role object for other role objects, and so on.
  • Step 21: User 2 triggers the game system on the terminal 2 and enters authentication information, which may be a username and a password.
  • Step 22: The terminal 2 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the identity verification is passed, returns a second graphical user interface to the terminal 2; the second graphical user interface includes a second role object capable of performing virtual operations based on triggering operations of the user 2, the virtual operations including a movement operation of the second role object, an attack operation or a skill release operation of the second role object on another role object, and so on.
  • In this embodiment, the first role object rendered in the terminal 1 and the second role object rendered in the terminal 2 belong to the same group. A window position of the character container object in the first graphical user interface of the terminal 1 carries a role operation object associated with the second role object; when a view acquisition gesture (such as a long-press gesture) on the role operation object is detected, the terminal 1 can invoke the virtual lens of the terminal 2 and thereby obtain the field-of-view image of the terminal 2 through the virtual lens, and the terminal 1 displays the field-of-view image for as long as the view acquisition gesture (such as the long-press gesture) continues.
  • Correspondingly, a window position of the character container object in the second graphical user interface of the terminal 2 carries a role operation object associated with the first role object; in a manner similar to that of the terminal 1, the terminal 2 can invoke the virtual lens of the terminal 1 so as to obtain the field-of-view image of the terminal 1 through the virtual lens, and details are not described herein again.
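The mutual invocation described above can be sketched as follows; the class names, method names, and string payloads here are illustrative assumptions, not part of the claimed system.

```python
# Sketch of two same-group terminals each invoking the other's virtual lens
# through the teammate's role operation object. All names are illustrative.

class VirtualLens:
    def __init__(self, owner_name):
        self.owner_name = owner_name

    def capture(self):
        # Stand-in for capturing the owner terminal's field-of-view image.
        return f"view-image-of-{self.owner_name}"

class Terminal:
    def __init__(self, name):
        self.name = name
        self.lens = VirtualLens(name)
        self.peers = {}  # role operation object id -> teammate Terminal

    def register_teammate(self, role_op_id, peer):
        # A window position carrying the role operation object of a teammate.
        self.peers[role_op_id] = peer

    def on_view_gesture(self, role_op_id):
        # While the view acquisition gesture (e.g. a long press) is held,
        # invoke the teammate terminal's virtual lens and show its image.
        peer = self.peers[role_op_id]
        return peer.lens.capture()

# Terminal 1 and terminal 2 each hold a role operation object for the other.
t1, t2 = Terminal("terminal-1"), Terminal("terminal-2")
t1.register_teammate("a11", t2)
t2.register_teammate("a21", t1)

image_on_t1 = t1.on_view_gesture("a11")  # terminal 1 sees terminal 2's view
image_on_t2 = t2.on_view_gesture("a21")  # and vice versa
```

Releasing the gesture would simply stop requesting frames from the peer; the teardown side is sketched later alongside the second instruction.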
  • Step 31: User 3 triggers the game system on the terminal 3 and enters authentication information, which may be a username and a password.
  • Step 32: The terminal 3 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the identity verification is passed, returns a third graphical user interface to the terminal 3; the third graphical user interface includes a third role object capable of performing virtual operations based on triggering operations of the user 3, the virtual operations including a movement operation of the third role object, an attack operation or a skill release operation of the third role object on another role object, and so on.
  • Step 41: User 4 triggers the game system on the terminal 4 and enters authentication information, which may be a username and a password.
  • Step 42: The terminal 4 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the identity verification is passed, returns a fourth graphical user interface to the terminal 4; the fourth graphical user interface includes a fourth role object capable of performing virtual operations based on triggering operations of the user 4, the virtual operations including a movement operation of the fourth role object, an attack operation or a skill release operation of the fourth role object on another role object, and so on.
  • In this embodiment, both the terminal 3 and the terminal 4 render role operation objects associated with other role objects belonging to the same group; when a view acquisition gesture, such as a long-press gesture, on a role operation object is detected, the field-of-view image of the role object associated with that role operation object is obtained, and details are not described herein again.
  • In this application scenario, based on a triggering operation, the role objects in a first group can take the role objects in a second group as objects of information interaction.
  • Step 13: User 1 performs a triggering operation on the first graphical user interface presented by the terminal 1; the triggering operation may be directed to any virtual resource object in the first graphical user interface, including a skill release operation on any skill object, an information interaction operation on any role object (which may be understood as a physical attack operation), a movement operation of the first role object, and so on. In this embodiment, the triggering operation includes a view acquisition gesture on a role operation object in the character container object of the first graphical user interface.
  • Step 14: When the terminal 1 acquires the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information interaction instruction (eg, a physical attack instruction) on a corresponding role object, or a movement instruction; in the process of executing the instruction, the change of the corresponding data is recorded.
  • Step 15: The changed data is synchronized to the server 5 as first data corresponding to the terminal 1.
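Steps 14 and 15 can be sketched as follows; the instruction shapes, field names, and damage arithmetic are illustrative assumptions rather than the claimed data format.

```python
# Sketch of executing recognized instructions, recording the data each one
# changes, and collecting the changed data for upload to the server.

def execute_instruction(state, instruction):
    """Apply one recognized instruction and return the fields it changed."""
    changed = {}
    if instruction["type"] == "move":
        state["position"] = instruction["to"]
        changed["position"] = state["position"]
    elif instruction["type"] == "physical_attack":
        target = instruction["target"]
        state["targets"][target] -= instruction["damage"]
        changed["targets"] = {target: state["targets"][target]}
    return changed

state = {"position": (0, 0), "targets": {"b11": 100}}
sync_payload = {}
for instr in [{"type": "move", "to": (3, 4)},
              {"type": "physical_attack", "target": "b11", "damage": 30}]:
    # Record the change of the corresponding data while executing.
    sync_payload.update(execute_instruction(state, instr))

# sync_payload is what the terminal would upload to server 5 as its data.
```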
  • Step 23: User 2 performs a triggering operation on the second graphical user interface presented by the terminal 2; the triggering operation may be directed to any virtual resource object in the second graphical user interface, including a skill release operation on any skill object, an information interaction operation on any role object (which may be understood as a physical attack operation), a movement operation of the second role object, and so on. In this embodiment, the triggering operation includes a view acquisition gesture on a role operation object in the character container object of the second graphical user interface.
  • Step 24: When the terminal 2 acquires the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information interaction instruction (eg, a physical attack instruction) on a corresponding role object, or a movement instruction; in the process of executing the instruction, the change of the corresponding data is recorded.
  • Step 25: The changed data is synchronized to the server 5 as second data corresponding to the terminal 2.
  • Step 33: User 3 performs a triggering operation on the third graphical user interface presented by the terminal 3; the triggering operation may be directed to any virtual resource object in the third graphical user interface, including a skill release operation on any skill object, an information interaction operation on any role object (which may be understood as a physical attack operation), a movement operation of the third role object, and so on. In this embodiment, the triggering operation includes a view acquisition gesture on a role operation object in the character container object of the third graphical user interface.
  • Step 34: When the terminal 3 acquires the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information interaction instruction (eg, a physical attack instruction) on a corresponding role object, or a movement instruction; in the process of executing the instruction, the change of the corresponding data is recorded.
  • Step 35: The changed data is synchronized to the server 5 as third data corresponding to the terminal 3.
  • Step 43: User 4 performs a triggering operation on the fourth graphical user interface presented by the terminal 4; the triggering operation may be directed to any virtual resource object in the fourth graphical user interface, including a skill release operation on any skill object, an information interaction operation on any role object (which may be understood as a physical attack operation), a movement operation of the fourth role object, and so on. In this embodiment, the triggering operation includes a view acquisition gesture on a role operation object in the character container object of the fourth graphical user interface.
  • Step 44: When the terminal 4 acquires the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information interaction instruction (eg, a physical attack instruction) on a corresponding role object, or a movement instruction; in the process of executing the instruction, the change of the corresponding data is recorded.
  • Step 45: The changed data is synchronized to the server 5 as fourth data corresponding to the terminal 4.
  • Step 50: The server 5 updates data based on the first data synchronized by the terminal 1, the second data synchronized by the terminal 2, the third data synchronized by the terminal 3, and the fourth data synchronized by the terminal 4, and then synchronizes the updated data to the terminal 1, the terminal 2, the terminal 3, and the terminal 4 respectively.
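Step 50 can be sketched as a server-side fold of the four terminals' data into one shared state, broadcast back to all four; the dictionary-based world state is an illustrative stand-in, not the claimed server design.

```python
# Sketch of step 50: merge the data synchronized by each terminal into one
# shared world state and hand every terminal the same updated snapshot.

def server_update(world, per_terminal_data):
    # Fold each terminal's changed data into the shared world state.
    for terminal_id, data in per_terminal_data.items():
        world.setdefault(terminal_id, {}).update(data)
    # Every terminal then receives the same updated snapshot.
    return {tid: dict(world) for tid in per_terminal_data}

world_state = {}
updates = server_update(world_state, {
    "terminal-1": {"position": (3, 4)},
    "terminal-2": {"hp": 70},
    "terminal-3": {"skill_ready": True},
    "terminal-4": {"position": (9, 9)},
})
```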
  • The application scenario relates to a Multiplayer Online Battle Arena (MOBA) game. The technical terms involved in a MOBA game are: 1) UI layer, that is, the icons in the graphical user interface; 2) skill indicator: a special effect, an aperture, or an operation used to assist skill release; 3) virtual lens: may be understood as the camera in the game; 4) mini map: a scaled-down version of the large map, which may be understood as a radar map, on which the information and positions of the enemy are displayed.
  • FIG. 10 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention; this example is based on an application scenario of an actual interaction process.
  • The graphical user interface 90 rendered in this embodiment includes a role selection area 92, and the role selection area 92 includes a character container object; in this illustration, the character container object includes four window positions, each of which renders a role operation object: a role operation object 921, a role operation object 922, a role operation object 923, and a role operation object 924. Each role operation object is associated with a role object, and the four role objects belong to the same group as the user role object.
  • The graphical user interface 90 further includes an area 91. When no view acquisition gesture on any role operation object in the role selection area 92 is detected, the area 91 renders a mini map showing the deployment of both sides (see FIG. 10). When a view acquisition gesture (such as a long-press gesture) on any role operation object (such as the role operation object 921) in the role selection area 92 is detected, the terminal invokes the virtual lens corresponding to the role object associated with the role operation object 921, controls the virtual lens to capture a field-of-view image, and returns the image to the graphical user interface 90 of the terminal; the area 91 then renders the field-of-view image of the role object associated with the role operation object 921 (not shown in FIG. 10). In this way, the user can quickly obtain the field-of-view image of the corresponding second role object through the view acquisition gesture on the role operation object, thereby greatly improving the user's operation experience in the interaction process.
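The switching behavior of the area 91 can be sketched as a simple two-state view; the callback names and content strings are illustrative assumptions.

```python
# Sketch of area 91: it shows the mini map by default and swaps in a
# teammate's field-of-view image while a view acquisition gesture is held.

class Area91:
    def __init__(self):
        self.content = "mini-map"  # default: deployment of both sides

    def on_view_gesture_begin(self, role_op_id):
        # A long press on a role operation object replaces the mini map
        # with the associated role object's field-of-view image.
        self.content = f"view-image:{role_op_id}"

    def on_view_gesture_end(self):
        # Releasing the gesture restores the mini map.
        self.content = "mini-map"

area = Area91()
area.on_view_gesture_begin("921")
held = area.content        # "view-image:921" while the press is held
area.on_view_gesture_end()
restored = area.content    # back to "mini-map"
```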
  • the embodiment of the invention further provides a terminal.
  • FIG. 11 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention; as shown in FIG. 11, the terminal includes a rendering processing unit 61, a deployment unit 62, a detecting unit 63, and an operation execution unit 64.
  • The rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with at least one role operation object, as obtained by the operation execution unit 64.
  • The deployment unit 62 is configured to deploy, in at least one role selection area of the graphical user interface, at least one character container object that includes at least one window position.
  • The detecting unit 63 is configured to detect a view acquisition gesture on at least one role operation object in the character container object.
  • The operation execution unit 64 is configured to obtain, when the detecting unit 63 detects a view acquisition gesture on at least one role operation object in the character container object, the field-of-view image captured by the virtual lens associated with the at least one role operation object.
  • Here, the graphical user interface includes at least one role selection area, the role selection area includes at least one character container object, and the character container object includes at least one window position, at least some of the window positions carrying corresponding role operation objects. A role operation object is represented in the graphical user interface by an identifier of the role object associated with it (the identifier may be an avatar); the role object associated with the role operation object belongs to the same group as the user role object.
  • The rendering manner of the character container object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object can be characterized by a role selection bar object or a role selection disk object.
  • As shown in FIG. 3, the graphical user interface 800 rendered by the rendering processing unit 61 includes at least one virtual resource object, where the virtual resource object includes at least one user role object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, the first virtual operation including but not limited to a movement operation, a physical attack operation, a skill attack operation, and so on.
  • It can be understood that the user role object a10 is the role object manipulated by the user of the terminal; in the game system, the user role object a10 performs corresponding actions in the graphical user interface based on the user's operations.
  • The graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation by operating the skill object 803.
  • The deployment unit 62 deploys a role selection area 802 in the graphical user interface; a character container object is deployed in the role selection area 802. In this embodiment, the character container object is characterized by a role selection bar object (ie, the character container object presents a strip-shaped display effect).
  • The character container object includes at least one window position; a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window position. In this embodiment, the role operation object is represented by an avatar as an example; that is, the role selection area 802 includes at least one avatar, and the at least one avatar corresponds one-to-one with the at least one second role object of the same group as the user role object.
  • As shown in FIG. 3, the five-versus-five application scenario includes four role objects that belong to the same group as the user role object a10, and correspondingly four role operation objects are rendered in the role selection area 802. This embodiment can be applied to application scenarios involving multiplayer battles with at least two group members.
  • Further, the mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the associated role objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14; details are not described herein again.
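The ordering rule above can be sketched as a sort by entry time; the identifier names and timestamps are illustrative assumptions.

```python
# Sketch of laying out role operation objects in the role selection area by
# the time each associated role object entered the game system (earliest
# first, occupying the first window position).

def layout_role_operation_objects(teammates):
    """teammates: list of (role_op_id, entry_timestamp) tuples."""
    return [op_id for op_id, ts in sorted(teammates, key=lambda t: t[1])]

order = layout_role_operation_objects(
    [("a13", 30.0), ("a11", 5.0), ("a14", 42.0), ("a12", 17.5)])
# a11 entered earliest, so it occupies the first window position.
```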
  • In an embodiment, the operation execution unit 64 is configured to generate and send a first instruction when the detecting unit 63 detects a view acquisition gesture on at least one role operation object in the character container object, the first instruction being used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to capture a field-of-view image; while the detecting unit 63 continues to detect the view acquisition gesture, the operation execution unit 64 obtains the field-of-view image captured by the virtual lens.
  • Specifically, when the detecting unit 63 detects a view acquisition gesture on a role operation object in the role selection area 802 (as shown in FIG. 3, a long-press gesture on the role operation object a11), the operation execution unit 64 generates a first instruction, establishes, based on the first instruction, a network link to the other terminal corresponding to the role object associated with the role operation object, and sends the first instruction over the network link to that other terminal, so as to control the other terminal to invoke its virtual lens based on the first instruction and capture a field-of-view image through the virtual lens. While the detecting unit 63 continuously detects the long-press gesture on the role operation object a11, the operation execution unit 64 obtains the field-of-view image sent by the other terminal in real time and renders the field-of-view image on the graphical user interface, as shown in the field-of-view image display area 801 and its enlarged view 801a in FIG. 3, in which the field-of-view image corresponding to the role operation object a11 is displayed. The field-of-view image is the image that the manipulating user of the role object associated with the role operation object a11 can browse; for example, when the role object c11 associated with the role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the field-of-view image display area 801 of the graphical user interface 800 displays a field-of-view image in which the role object c11 performs the release operation of the skill object toward the other role object b11, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the field-of-view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the field-of-view image of a teammate.
  • In an embodiment, the operation execution unit 64 is further configured to generate a second instruction when the detecting unit 63 detects that the view acquisition gesture is terminated, and to terminate, based on the second instruction, the invocation of the virtual lens associated with the at least one role operation object.
  • Specifically, when the detecting unit 63 detects that the long-press gesture is terminated, the operation execution unit 64 generates a second instruction, terminates, based on the second instruction, the invocation of the virtual lens associated with the at least one role operation object, and terminates the network link between the terminal and the other terminal.
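The first/second instruction lifecycle driven by the long-press gesture can be sketched as follows; the 0.5-second threshold, event kinds, and instruction labels are illustrative assumptions.

```python
# Sketch of the gesture lifecycle: once a press lasts long enough to count
# as a long press, a first instruction invokes the teammate's virtual lens;
# when the press ends, a second instruction terminates the invocation.

LONG_PRESS_THRESHOLD = 0.5  # seconds before a press counts as a long press

def run_gesture(events):
    """events: list of (timestamp, kind), kind in {'down', 'tick', 'up'}.
    Returns the instructions the operation execution unit would emit."""
    instructions = []
    press_started = None
    lens_invoked = False
    for ts, kind in events:
        if kind == "down":
            press_started = ts
        elif kind == "up" and press_started is not None:
            if lens_invoked:
                # Gesture terminated: the second instruction stops the
                # virtual lens invocation and tears down the network link.
                instructions.append("second-instruction")
            press_started = None
            lens_invoked = False
        # Poll: once the press has lasted long enough, emit the first
        # instruction that invokes the teammate terminal's virtual lens.
        if (press_started is not None and not lens_invoked
                and ts - press_started >= LONG_PRESS_THRESHOLD):
            instructions.append("first-instruction")
            lens_invoked = True
    return instructions

tap = run_gesture([(0.0, "down"), (0.2, "up")])                  # too short
hold = run_gesture([(0.0, "down"), (0.6, "tick"), (1.2, "up")])  # long press
```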
  • It should be noted that the functions of the processing units in the terminal of the embodiments of the present invention can be understood by referring to the related description of the information processing method; the processing units in the information processing terminal according to the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments, or by running, on an intelligent terminal, software that performs the functions described in the embodiments of the present invention.
  • an embodiment of the present invention further provides a terminal.
  • FIG. 12 is a schematic structural diagram of a terminal according to Embodiment 6 of the present invention; as shown in FIG. 12, the terminal includes a rendering processing unit 61, a deployment unit 62, a detecting unit 63, an operation execution unit 64, and a communication unit 65.
  • The rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with at least one role operation object, as obtained by the operation execution unit 64; it is further configured to render the state attribute information obtained by the operation execution unit 64 in at least one window position corresponding to the associated role operation object in a first preset display manner.
  • The deployment unit 62 is configured to deploy, in at least one role selection area of the graphical user interface, at least one character container object that includes at least one window position.
  • The detecting unit 63 is configured to detect a view acquisition gesture on at least one role operation object in the character container object.
  • The operation execution unit 64 is configured to obtain, when the detecting unit 63 detects a view acquisition gesture on at least one role operation object in the character container object, the field-of-view image captured by the virtual lens associated with the at least one role operation object; it is configured to continuously record changes of the state attribute of the user role object in the graphical user interface, generate state attribute information of the user role object, and synchronously update the state attribute information to the server through the communication unit 65; it is further configured to obtain, from the server through the communication unit 65, the state attribute information of the at least one role object associated with the at least one role operation object.
  • For the functions of the rendering processing unit 61, the deployment unit 62, the detecting unit 63, and the operation execution unit 64, refer to the description of Embodiment 5; details are not described herein again.
  • In this embodiment, the operation execution unit 64 continuously records changes of the state attribute of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records the change of the state attribute of the user role object in real time, thereby obtaining the state attribute information of the user role object. The state attribute information includes, but is not limited to, a blood volume value, a health value, or skill attribute information of the user role object.
  • the operation execution unit 64 synchronizes the acquired state attribute information of the user role object to the server through the communication unit 65 in real time.
  • Correspondingly, the terminal corresponding to a second role object also synchronizes the state attribute information of the second role object to the server in real time.
  • The operation execution unit 64 obtains, from the server through the communication unit 65, the state attribute information of the at least one second role object synchronized by the other terminals, that is, the state attribute information of the at least one role object associated with the at least one role operation object in the character container object of the graphical user interface. It can be understood that the operation execution unit 64 obtains the state attribute information of the second role objects belonging to the same group as the user role object, and renders the state attribute information of each second role object in at least one window position corresponding to the associated role operation object according to the first preset display manner. Referring to FIG. 6, the state attribute information is a blood volume value; an outer ring region of a role operation object in the character container object serves as a blood bar display region, and the proportion of blood volume displayed in the blood bar display region characterizes the current blood volume value of the corresponding second role object.
  • Certainly, the manner in which the state attribute information is rendered in the window position corresponding to the role operation object associated with the second role object in the embodiments of the present invention is not limited to that shown in FIG. 6.
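The blood-bar rendering described above reduces to computing a fill fraction for the ring around the role operation object; the dictionary standing in for the server and all names are illustrative assumptions.

```python
# Sketch of the FIG. 6 style display: each terminal uploads its role
# object's blood volume, and a teammate's current value is shown as the
# filled fraction of the outer ring of their role operation object.

server_state = {}  # role object id -> state attribute information

def sync_state(role_id, hp, max_hp):
    # Each terminal synchronizes its own role object's state in real time.
    server_state[role_id] = {"hp": hp, "max_hp": max_hp}

def ring_fill_fraction(role_id):
    # Fraction of the outer-ring blood bar to fill for the teammate's
    # role operation object, clamped to [0, 1].
    info = server_state[role_id]
    return max(0.0, min(1.0, info["hp"] / info["max_hp"]))

sync_state("c11", 75, 100)
fraction = ring_fill_fraction("c11")  # 0.75 of the ring is filled
```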
  • In an embodiment, the operation execution unit 64 is further configured to continuously record changes of the skill attribute of the user role object in the graphical user interface and, upon determining that the skill attribute of the user role object reaches a preset condition, generate skill attribute information of the user role object, which is synchronously updated to the server through the communication unit 65;
  • the communication unit 65 is configured to obtain, from the server, the skill attribute information of at least one role object associated with the at least one role operation object;
  • the rendering processing unit 61 is further configured to render the skill attribute information obtained by the operation execution unit 64 in at least one window position corresponding to the associated role operation object in a second preset display manner.
  • In this embodiment, the operation execution unit 64 continuously records changes of the skill attribute of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the operation execution unit 64 records the change of the skill attribute of the user role object in real time. Since a skill object released by the user role object recovers only after a period of time, that is, the skill object can be released again only after that time period elapses, the operation execution unit 64 in this embodiment records the change of the skill attribute of the user role object in real time and, upon determining that at least one skill object can be released, determines that the skill attribute of the user role object reaches the preset condition and generates the skill attribute information of the user role object, the skill attribute information indicating that the user role object can release at least one skill object.
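The cooldown logic behind the "preset condition" can be sketched as follows; the class, skill names, and recovery periods are illustrative assumptions.

```python
# Sketch of the skill attribute preset condition: a released skill object
# becomes releasable again only after its recovery period, and skill
# attribute information is generated once at least one skill is releasable.

class SkillTracker:
    def __init__(self, cooldowns):
        # cooldowns: skill name -> recovery period in seconds
        self.cooldowns = cooldowns
        self.ready_at = {name: 0.0 for name in cooldowns}

    def release(self, skill, now):
        self.ready_at[skill] = now + self.cooldowns[skill]

    def releasable(self, now):
        return [s for s, t in self.ready_at.items() if now >= t]

    def meets_preset_condition(self, now):
        # Skill attribute information is generated as soon as at least
        # one skill object can be released again.
        return len(self.releasable(now)) > 0

tracker = SkillTracker({"dash": 5.0, "stun": 12.0})
tracker.release("dash", now=0.0)
tracker.release("stun", now=0.0)
mid_fight = tracker.meets_preset_condition(now=4.0)   # nothing ready yet
after_dash = tracker.meets_preset_condition(now=6.0)  # "dash" recovered
```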
  • the operation execution unit 64 synchronizes the acquired skill attribute information of the user role object to the server through the communication unit 65 in real time.
  • Correspondingly, the terminal corresponding to a second role object also synchronizes the skill attribute information of the second role object to the server in real time.
  • The operation execution unit 64 obtains, from the server through the communication unit 65, the skill attribute information of the at least one second role object synchronized by the other terminals, that is, the skill attribute information of the at least one role object associated with the at least one role operation object in the character container object of the graphical user interface. This may be understood as follows: the operation execution unit 64 obtains the skill attribute information of the second role objects belonging to the same group as the user role object, and renders the skill attribute information of each second role object in at least one window position corresponding to the associated role operation object in the second preset display manner; the skill attribute information displayed on a role operation object indicates that the corresponding second role object is currently capable of releasing at least one skill object. Referring to FIG. 8, the skill attribute information is represented by a circular identifier in the upper right corner of a role operation object in the role selection area 802; when the role operation object displays the circular identifier, it indicates that the second role object associated with the role operation object is currently capable of releasing at least one skill object; when the role operation object does not display the circular identifier, it indicates that the second role object associated with the role operation object is currently unable to release any skill object.
  • Certainly, the manner in which the skill attribute information is rendered in the window position corresponding to the role operation object associated with the second role object in the embodiments of the present invention is not limited to that shown in FIG. 8.
  • It should be noted that the functions of the processing units in the terminal of the embodiments of the present invention can be understood by referring to the related description of the information processing method; the processing units in the information processing terminal according to the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments, or by running, on an intelligent terminal, software that performs the functions described in the embodiments of the present invention.
  • In practical applications, the rendering processing unit 61, the deployment unit 62, the detecting unit 63, and the operation execution unit 64 in the terminal may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA) in the terminal.
  • an embodiment of the present invention further provides a terminal.
  • the terminal may be an electronic device such as a PC, or a portable electronic device such as a tablet computer, a laptop computer, or a smartphone; the game system is implemented on the terminal by installing a software application (such as a game application).
  • the terminal includes at least a memory for storing data and a processor for data processing.
  • the processor for data processing may be a microprocessor, a CPU, a DSP, or an FPGA when performing processing.
  • the steps of the information processing method in the embodiments of the present invention are implemented through operation instructions.
  • the terminal includes a processor 71 and a display 72; the processor 71 is configured to execute a software application and perform rendering on the display 72 to obtain a graphical user interface; the processor 71, the graphical user interface, and the software application are implemented in a game system;
  • the processor 71 is configured to render at least one virtual resource object on the graphical user interface, and to deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot;
  • and, when a view acquisition gesture on at least one role operation object in the role selector object is detected, to render on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.
  • the processor 71 is configured to: when the view acquisition gesture on the at least one role operation object in the role selector object is detected, generate and send a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image; and obtain, during detection of the view acquisition gesture, the view image captured by the virtual lens.
  • the processor 71 is further configured to: when the view acquisition gesture is terminated, generate a second instruction, and terminate the invocation of the virtual lens associated with the at least one role operation object based on the second instruction.
  • the terminal further includes a communication device 74.
  • the processor 71 is further configured to continuously record changes of the state attribute of the user role object in the graphical user interface, generate state attribute information of the user role object, and synchronously update the state attribute information to the server through the communication device 74.
  • the processor 71 is further configured to obtain, from the server through the communication device 74, state attribute information of at least one role object associated with the at least one role operation object, and render the state attribute information, in a first preset display manner, in at least one window slot corresponding to the associated role operation object.
  • the processor 71 is further configured to continuously record changes of the skill attribute of the user role object in the graphical user interface, and, when it is determined that the skill attribute of the user role object reaches a preset condition, generate skill attribute information of the user role object and synchronously update the skill attribute information to the server through the communication device 74.
  • the processor 71 is further configured to obtain, from the server through the communication device 74, skill attribute information of at least one role object associated with the at least one role operation object, and render the skill attribute information, in a second preset display manner, in at least one window slot corresponding to the associated role operation object.
  • the terminal in this embodiment includes: a processor 71, a display 72, a memory 73, an input device 76, a bus 75, and a communication device 74; the processor 71, the memory 73, the input device 76, the display 72, and the communication device 74 are all connected through the bus 75, which is used to transfer data among the processor 71, the memory 73, the display 72, and the communication device 74.
  • the input device 76 is mainly configured to obtain input operations of the user, and may differ depending on the terminal: when the terminal is a PC, the input device 76 may be a mouse, a keyboard, or the like; when the terminal is a portable device such as a smartphone or a tablet computer, the input device 76 may be a touch screen.
  • a computer storage medium is stored in the memory 73; the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to perform the information processing method according to the embodiments of the present invention.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • in addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product.
  • based on such an understanding, the technical solutions of the embodiments of the present invention may be embodied, in essence, in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods of the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • in the embodiments of the present invention, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of the role selector object in the role selection area deployed in the graphical user interface,
  • so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An information processing method, a terminal, and a computer storage medium. A graphical user interface is obtained by executing a software application on a processor of a terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. The method includes: rendering at least one virtual resource object on the graphical user interface (201); deploying, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot (202); and, when a view acquisition gesture on at least one role operation object in the role selector object is detected, rendering on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object (203).

Description

Information Processing Method, Terminal, and Computer Storage Medium

Technical Field

The present invention relates to information processing technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
Background

With the rapid development of Internet technologies and the growing popularity of smart terminals with large and extra-large screens, the processing capability of smart terminal processors keeps increasing, giving rise to many applications that implement control through human-computer interaction on large or extra-large screens. In control based on human-computer interaction, multiple users can run different interaction modes by forming groups in various ways, such as one-to-one, one-to-many, and many-to-many, to obtain different interaction results. For example, in a graphical user interface (GUI) rendered on a large or extra-large screen, after multiple users are divided into two different groups, control processing in human-computer interaction enables information exchange between different groups, with different interaction results obtained according to responses to the information exchange; control processing in human-computer interaction also enables information exchange among members of the same group, with different interaction results obtained according to responses to the information exchange.

In the related art, the graphical user interface rendered on a large or extra-large screen often displays only the part of the virtual area where the user-controlled virtual character is located. Consequently, during user control, the graphical user interface may not include target objects controlled by group members belonging to the same group as the user. In this case, to obtain a group member's field of view, the user needs multiple operations (such as slide operations) to move the character until it reaches the vicinity of the target object, so as to obtain, in the current graphical user interface, the image presented on the graphical user interface controlled by the group member, that is, to obtain the group member's field of view. This process takes a long time to operate and cannot meet the demand for fast information exchange. At present, no effective solution to this problem exists in the related art.
Summary

Embodiments of the present invention are expected to provide an information processing method, a terminal, and a computer storage medium, which can quickly obtain view images of group members during information exchange and improve user experience.

To achieve the foregoing objective, the technical solutions of the embodiments of the present invention are implemented as follows:

An embodiment of the present invention provides an information processing method, in which a graphical user interface is obtained by executing a software application on a processor of a terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system; the method includes:

rendering at least one virtual resource object on the graphical user interface;

deploying, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot; and

when a view acquisition gesture on at least one role operation object in the role selector object is detected, rendering on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.
An embodiment of the present invention further provides a terminal, including a rendering processing unit, a deployment unit, a detection unit, and an operation execution unit, where:

the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, render at least one virtual resource object on the graphical user interface, and further render, on the graphical user interface, a view image that is obtained by the operation execution unit and captured by a virtual lens associated with at least one role operation object;

the deployment unit is configured to deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot;

the detection unit is configured to detect a view acquisition gesture on at least one role operation object in the role selector object; and

the operation execution unit is configured to: when the detection unit detects a view acquisition gesture on at least one role operation object in the role selector object, obtain a view image captured by a virtual lens associated with the at least one role operation object.
An embodiment of the present invention further provides a terminal, including a processor and a display; the processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented in a game system;

the processor is configured to render at least one virtual resource object on the graphical user interface, and to deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot;

and, when a view acquisition gesture on at least one role operation object in the role selector object is detected, to render on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.

An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, where the computer-executable instructions are used to perform the information processing method according to the embodiments of the present invention.

With the information processing method, terminal, and computer storage medium of the embodiments of the present invention, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of a role selector object deployed in a role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an application architecture for information exchange of an information processing method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of an information processing method according to Embodiment One of the present invention;

FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of an information processing method according to Embodiment Two of the present invention;

FIG. 5 is a schematic flowchart of an information processing method according to Embodiment Three of the present invention;

FIG. 6 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;

FIG. 7 is a schematic flowchart of an information processing method according to Embodiment Four of the present invention;

FIG. 8 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of an interactive application of an information processing method according to an embodiment of the present invention;

FIG. 10 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of a terminal according to Embodiment Five of the present invention;

FIG. 12 is a schematic structural diagram of a terminal according to Embodiment Six of the present invention;

FIG. 13 is a schematic structural diagram of a terminal according to Embodiment Seven of the present invention.
Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

FIG. 1 is a schematic diagram of an application architecture for information exchange of an information processing method according to an embodiment of the present invention. As shown in FIG. 1, the application architecture includes a server 101 and at least one terminal. In this application architecture, the terminals include terminal 102, terminal 103, terminal 104, terminal 105, and terminal 106, and the at least one terminal can establish a connection with the server 101 through a network 100 (such as a wired network or a wireless network). Specifically, the terminals include mobile phones, desktop computers, PCs, all-in-one machines, and the like.

In this embodiment, the processor of the terminal can execute a software application and perform rendering on the display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented in a game system. In this embodiment, while the processor, the graphical user interface, and the software application are implemented in the game system, the at least one terminal can exchange information with the server 101 through a wired or wireless network to implement one-to-one or many-to-many (for example, three-versus-three or five-versus-five) application mode scenarios in the game system. The one-to-one application scenario may be information exchange between a virtual resource object in a graphical user object rendered by a terminal and a virtual resource object preset in the game system (which may be understood as human versus machine), that is, information exchange between the terminal and the server; the one-to-one application scenario may also be information exchange between a virtual resource object in a graphical user object rendered by one terminal and a virtual resource object in a graphical user object rendered by another terminal, for example, information exchange between the virtual resource object in the graphical user object rendered by terminal 102 and the virtual resource object in the graphical user object rendered by terminal 103. In a many-to-many application mode scenario, taking a three-versus-three scenario as an example, virtual resource objects in the graphical user objects rendered by terminal 1, terminal 2, and terminal 3 form a first group, virtual resource objects in the graphical user objects rendered by terminal 4, terminal 5, and terminal 6 form a second group, and information is exchanged between members of the first group and members of the second group.

The example in FIG. 1 above is only one application architecture instance for implementing the embodiments of the present invention; the embodiments of the present invention are not limited to the application architecture in FIG. 1, and the embodiments of the present invention are proposed based on this application architecture.
Embodiment One

An embodiment of the present invention provides an information processing method. FIG. 2 is a schematic flowchart of the information processing method according to Embodiment One of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 2, the method includes:

Step 201: Render at least one virtual resource object on the graphical user interface. At least one of the virtual resource objects is configured as a user role object that performs a first virtual operation according to an input first user command.

Step 202: Deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot.

Step 203: When a view acquisition gesture on at least one role operation object in the role selector object is detected, render on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.

In this embodiment, the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object; the role selector object includes at least one window slot, at least some of which carry corresponding role operation objects. A role operation object may be represented in the graphical user interface by an identifier (the identifier may be an avatar) of the role object associated with the role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object. The role selector object may be rendered in the role selection area in manners including, but not limited to, a bar shape or a ring shape; that is, the role selector object may be represented by a role selection bar object or a role selection wheel object.

FIG. 3 is a first schematic diagram of a graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 3, the graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one user role object a10. The user of the terminal can exchange information through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, where the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like. It can be understood that the user role object a10 is the role object controlled by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations. As an implementation, the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation through a skill release operation.

In the illustration of FIG. 3, the graphical user interface has a role selection area 802 in which a role selector object is deployed; in this illustration, the role selector object is represented by a role selection bar object (that is, the role selector object presents a bar-shaped display effect). The role selector object includes at least one window slot, and role operation objects associated with second role objects belonging to the same group as the user role object are rendered in corresponding window slots. Taking role operation objects represented by avatars as an example, the role selection area 802 includes at least one avatar, and the avatars are in one-to-one correspondence with at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are four role objects in the same group as the user role object a10, and the role selection area 802 correspondingly includes four role operation objects, namely role operation object a11, role operation object a12, role operation object a13, and role operation object a14 shown in FIG. 3. It can be understood that the four role operation objects in the role selection area 802 are in one-to-one correspondence with the four second role objects belonging to the same group as the user role object. This embodiment is applicable to multiplayer battle application scenarios including at least two group members.

As an implementation, the relative positions of at least two role operation objects in the role selection area 802 are determined by the order in which the at least two role operation objects entered the game system. As shown in FIG. 3, the role object associated with role operation object a11 entered the game system earlier than the role object associated with role operation object a12, and so on for role operation object a13 and role operation object a14, which is not repeated here.

In this embodiment, when a view acquisition gesture on at least one role operation object in the role selector object is detected, a view image captured by a virtual lens associated with the at least one role operation object is rendered on the graphical user interface; the view acquisition gesture may be a long-press gesture, a double-tap gesture, or the like, and is not limited to these gestures.

Here, when the view acquisition gesture on the at least one role operation object in the role selector object is detected, the method further includes: generating and sending a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image; and obtaining, during detection of the view acquisition gesture, the view image captured by the virtual lens.

Specifically, referring to FIG. 3, taking the view acquisition gesture as a long-press gesture as an example, when a long-press gesture on a role operation object in the role selection area 802 (for example, role operation object a11 shown in FIG. 3) is detected, the terminal generates a first instruction, establishes, based on the first instruction, a network link to another terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control that terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens. While the long-press gesture on role operation object a11 continues to be detected, the terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface. As shown in the view image display area 801 in FIG. 3 and its enlarged view 801a, the view image corresponding to role operation object a11 is displayed in the view image display area 801; the view image is the image that the user controlling the role object associated with role operation object a11 can see. For example, if the role object c11 associated with role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the view image display area 801 of the graphical user interface 800 displays a view image containing the role object c11 associated with role operation object a11 currently performing the release operation of the skill object toward the other role object b11, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain a teammate's view image.

With the technical solution of this embodiment of the present invention, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of a role selector object deployed in a role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction.
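The gesture lifecycle described above (long press issues a first instruction that invokes the teammate terminal's virtual lens; the view image is rendered while the gesture persists; release terminates the invocation) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: all class and method names (`VirtualLens`, `Terminal`, `on_view_gesture_start`, and so on) are assumptions, and the network link to the other terminal is collapsed into a local object.

```python
class VirtualLens:
    """Stands in for the virtual lens of the teammate's terminal (assumed name)."""
    def __init__(self, owner):
        self.owner = owner

    def capture(self):
        # In the described method this image comes from the other terminal
        # over the established network link; here it is a placeholder string.
        return f"view image of {self.owner}"


class Terminal:
    """Sketch of the local terminal holding one window slot per same-group teammate."""
    def __init__(self, teammates):
        self.lenses = {name: VirtualLens(name) for name in teammates}
        self.active_lens = None
        self.view_image = None

    def on_view_gesture_start(self, role_op):
        # "First instruction": invoke the virtual lens associated with the
        # long-pressed role operation object.
        self.active_lens = self.lenses[role_op]

    def on_gesture_frame(self):
        # While the gesture is still detected, obtain the captured view image
        # in real time and render it (here: store it).
        if self.active_lens is not None:
            self.view_image = self.active_lens.capture()
        return self.view_image

    def on_view_gesture_end(self):
        # "Second instruction": terminate the invocation of the virtual lens.
        self.active_lens = None
        self.view_image = None


t = Terminal(["a11", "a12", "a13", "a14"])
t.on_view_gesture_start("a11")
print(t.on_gesture_frame())   # view image of a11
t.on_view_gesture_end()
```

The same sketch also covers the second-instruction behavior of the later embodiments: releasing the gesture clears the active lens, which would correspond to tearing down the network link to the other terminal.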
Embodiment Two

An embodiment of the present invention provides an information processing method. FIG. 4 is a schematic flowchart of the information processing method according to Embodiment Two of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 4:

Step 301: Render at least one virtual resource object on the graphical user interface.

Step 302: Deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot.

In this embodiment, the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object; the role selector object includes at least one window slot, at least some of which carry corresponding role operation objects. A role operation object may be represented in the graphical user interface by an identifier (the identifier may be an avatar) of the role object associated with the role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object. The role selector object may be rendered in the role selection area in manners including, but not limited to, a bar shape or a ring shape; that is, the role selector object may be represented by a role selection bar object or a role selection wheel object.

Specifically, referring to FIG. 3, the graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one user role object a10. The user of the terminal can exchange information through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, where the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like. It can be understood that the user role object a10 is the role object controlled by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations. As an implementation, the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation through a skill release operation.

In the illustration of FIG. 3, the graphical user interface has a role selection area 802 in which a role selector object is deployed; in this illustration, the role selector object is represented by a role selection bar object (that is, the role selector object presents a bar-shaped display effect). The role selector object includes at least one window slot, and role operation objects associated with second role objects belonging to the same group as the user role object are rendered in corresponding window slots. Taking role operation objects represented by avatars as an example, the role selection area 802 includes at least one avatar, and the avatars are in one-to-one correspondence with at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are four role objects in the same group as the user role object a10, and the role selection area 802 correspondingly includes four role operation objects, namely role operation object a11, role operation object a12, role operation object a13, and role operation object a14 shown in FIG. 3. It can be understood that the four role operation objects in the role selection area 802 are in one-to-one correspondence with the four second role objects belonging to the same group as the user role object. This embodiment is applicable to multiplayer battle application scenarios including at least two group members.

As an implementation, the relative positions of at least two role operation objects in the role selection area 802 are determined by the order in which the at least two role operation objects entered the game system. As shown in FIG. 3, the role object associated with role operation object a11 entered the game system earlier than the role object associated with role operation object a12, and so on for role operation object a13 and role operation object a14, which is not repeated here.

Step 303: When a view acquisition gesture on at least one role operation object in the role selector object is detected, generate and send a first instruction, and obtain, during detection of the view acquisition gesture, the view image captured by the virtual lens; the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image, and the view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface.

Specifically, referring to FIG. 3, taking the view acquisition gesture as a long-press gesture as an example, when a long-press gesture on a role operation object in the role selection area 802 (for example, role operation object a11 shown in FIG. 3) is detected, the terminal generates a first instruction, establishes, based on the first instruction, a network link to another terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control that terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens. While the long-press gesture on role operation object a11 continues to be detected, the terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface. As shown in the view image display area 801 in FIG. 3 and its enlarged view 801a, the view image corresponding to role operation object a11 is displayed in the view image display area 801; the view image is the image that the user controlling the role object associated with role operation object a11 can see. For example, if the role object c11 associated with role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the view image display area 801 of the graphical user interface 800 displays a view image containing the role object c11 associated with role operation object a11 currently performing the release operation of the skill object toward the other role object b11, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain a teammate's view image.

Step 304: When the view acquisition gesture is terminated, generate a second instruction, and terminate the invocation of the virtual lens associated with the at least one role operation object based on the second instruction.

Specifically, taking the view acquisition gesture as a long-press gesture as an example, when the long-press gesture is terminated, a second instruction is generated; based on the second instruction, the invocation of the virtual lens associated with the at least one role operation object is terminated, and the network link between the terminal and the other terminal is terminated.

With the technical solution of this embodiment of the present invention, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of a role selector object deployed in a role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction.
Embodiment Three

An embodiment of the present invention provides an information processing method. FIG. 5 is a schematic flowchart of the information processing method according to Embodiment Three of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 5:

Step 401: Render at least one virtual resource object on the graphical user interface.

Step 402: Deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot.

In this embodiment, the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object; the role selector object includes at least one window slot, at least some of which carry corresponding role operation objects. A role operation object may be represented in the graphical user interface by an identifier (the identifier may be an avatar) of the role object associated with the role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object. The role selector object may be rendered in the role selection area in manners including, but not limited to, a bar shape or a ring shape; that is, the role selector object may be represented by a role selection bar object or a role selection wheel object.

Specifically, referring to FIG. 3, the graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one user role object a10. The user of the terminal can exchange information through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, where the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like. It can be understood that the user role object a10 is the role object controlled by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations. As an implementation, the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation through a skill release operation.

In the illustration of FIG. 3, the graphical user interface has a role selection area 802 in which a role selector object is deployed; in this illustration, the role selector object is represented by a role selection bar object (that is, the role selector object presents a bar-shaped display effect). The role selector object includes at least one window slot, and role operation objects associated with second role objects belonging to the same group as the user role object are rendered in corresponding window slots. Taking role operation objects represented by avatars as an example, the role selection area 802 includes at least one avatar, and the avatars are in one-to-one correspondence with at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are four role objects in the same group as the user role object a10, and the role selection area 802 correspondingly includes four role operation objects, namely role operation object a11, role operation object a12, role operation object a13, and role operation object a14 shown in FIG. 3. It can be understood that the four role operation objects in the role selection area 802 are in one-to-one correspondence with the four second role objects belonging to the same group as the user role object. This embodiment is applicable to multiplayer battle application scenarios including at least two group members.

As an implementation, the relative positions of at least two role operation objects in the role selection area 802 are determined by the order in which the at least two role operation objects entered the game system. As shown in FIG. 3, the role object associated with role operation object a11 entered the game system earlier than the role object associated with role operation object a12, and so on for role operation object a13 and role operation object a14, which is not repeated here.

Step 403: When a view acquisition gesture on at least one role operation object in the role selector object is detected, render on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.

Here, when the view acquisition gesture on the at least one role operation object in the role selector object is detected, the method further includes: generating and sending a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image; and obtaining, during detection of the view acquisition gesture, the view image captured by the virtual lens.

Specifically, referring to FIG. 3, taking the view acquisition gesture as a long-press gesture as an example, when a long-press gesture on a role operation object in the role selection area 802 (for example, role operation object a11 shown in FIG. 3) is detected, the terminal generates a first instruction, establishes, based on the first instruction, a network link to another terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control that terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens. While the long-press gesture on role operation object a11 continues to be detected, the terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface. As shown in the view image display area 801 in FIG. 3 and its enlarged view 801a, the view image corresponding to role operation object a11 is displayed in the view image display area 801; the view image is the image that the user controlling the role object associated with role operation object a11 can see. For example, if the role object c11 associated with role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the view image display area 801 of the graphical user interface 800 displays a view image containing that release operation, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain a teammate's view image.

As an implementation, when the view acquisition gesture is terminated, a second instruction is generated, and the invocation of the virtual lens associated with the at least one role operation object is terminated based on the second instruction.

Specifically, taking the view acquisition gesture as a long-press gesture as an example, when the long-press gesture is terminated, a second instruction is generated; based on the second instruction, the invocation of the virtual lens associated with the at least one role operation object is terminated, and the network link between the terminal and the other terminal is terminated.

Step 404: Continuously record changes of the state attribute of the user role object in the graphical user interface, generate state attribute information of the user role object, and synchronously update the state attribute information to a server.

Step 405: Obtain, from the server, state attribute information of at least one role object associated with the at least one role operation object, and render the state attribute information, in a first preset display manner, in at least one window slot corresponding to the associated role operation object.

In this embodiment, the terminal continuously records changes of the state attribute of the user role object in the graphical user interface; that is, during information exchange between the user role object and other role objects, the terminal records changes of the state attribute of the user role object in real time, thereby obtaining the state attribute information of the user role object. The state attribute information includes, but is not limited to, a blood value, a hit point value, or skill attribute information of the user role object. The terminal synchronizes the obtained state attribute information of the user role object to the server in real time. Correspondingly, for at least one second role object belonging to the same group as the user role object, the terminal corresponding to the second role object also obtains the state attribute information of the second role object to the server in real time.

Further, the terminal obtains from the server the state attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the state attribute information of at least one role object associated with at least one role operation object in the role selector object of the graphical user interface; it can be understood that the terminal obtains the state attribute information of the second role objects belonging to the same group as the user role object, and renders the state attribute information of the second role objects, in a first preset display manner, in at least one window slot corresponding to the associated role operation object. FIG. 6 is a second schematic diagram of a graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 6, taking the state attribute information as a blood value as an example, the outer ring area of role operation object a21 in the role selection area 802 serves as a blood-bar display area a211, and the proportion of blood in the blood-bar display area represents the current blood value of the corresponding second role object. Of course, in the embodiments of the present invention, the manner in which the state attribute information is rendered, in the corresponding window slot, for the role operation object associated with the second role object is not limited to that shown in FIG. 6.

With the technical solution of this embodiment of the present invention, on the one hand, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of a role selector object deployed in a role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction. On the other hand, the state attribute information of the second role objects (that is, teammates) associated with the role operation objects in the role selector object is obtained by synchronizing the state attribute information of the second role objects belonging to the same group, and the state attribute information is rendered in the corresponding window slots in a specific manner; that is, the state attribute information of the second role objects (teammates) is reflected on the corresponding role operation objects (UI avatars), so that the user can quickly learn the state attribute information of the second role objects (teammates), improving the user's operation experience during information exchange.
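The blood-bar rendering of FIG. 6, where the outer ring of a teammate's avatar is filled in proportion to that teammate's current blood value, reduces to computing a fill ratio. The sketch below is an illustrative assumption (the function name and clamping behavior are not specified in the text); it only shows the proportional relationship described above.

```python
def hp_ring_fill(current_hp: float, max_hp: float) -> float:
    """Fraction of the blood-bar display area (the avatar's outer ring) to fill.

    Clamped to [0, 1] so that over- or under-flowing synchronized values
    cannot draw outside the ring (a defensive choice, assumed here).
    """
    if max_hp <= 0:
        return 0.0
    return max(0.0, min(1.0, current_hp / max_hp))


# A teammate synchronized at 750 of 1000 blood fills three quarters of the ring.
print(hp_ring_fill(750, 1000))   # 0.75
```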
Embodiment Four

An embodiment of the present invention provides an information processing method. FIG. 7 is a schematic flowchart of the information processing method according to Embodiment Four of the present invention. The information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal; the processor, the graphical user interface, and the software application are implemented in a game system. As shown in FIG. 7:

Step 501: Render at least one virtual resource object on the graphical user interface.

Step 502: Deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot.

In this embodiment, the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object; the role selector object includes at least one window slot, at least some of which carry corresponding role operation objects. A role operation object may be represented in the graphical user interface by an identifier (the identifier may be an avatar) of the role object associated with the role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object. The role selector object may be rendered in the role selection area in manners including, but not limited to, a bar shape or a ring shape; that is, the role selector object may be represented by a role selection bar object or a role selection wheel object.

Specifically, referring to FIG. 3, the graphical user interface 800 rendered on the display of the terminal includes at least one virtual resource object, and the virtual resource objects include at least one user role object a10. The user of the terminal can exchange information through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, where the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like. It can be understood that the user role object a10 is the role object controlled by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations. As an implementation, the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation through a skill release operation.

In the illustration of FIG. 3, the graphical user interface has a role selection area 802 in which a role selector object is deployed; in this illustration, the role selector object is represented by a role selection bar object (that is, the role selector object presents a bar-shaped display effect). The role selector object includes at least one window slot, and role operation objects associated with second role objects belonging to the same group as the user role object are rendered in corresponding window slots. Taking role operation objects represented by avatars as an example, the role selection area 802 includes at least one avatar, and the avatars are in one-to-one correspondence with at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are four role objects in the same group as the user role object a10, and the role selection area 802 correspondingly includes four role operation objects, namely role operation object a11, role operation object a12, role operation object a13, and role operation object a14 shown in FIG. 3. It can be understood that the four role operation objects in the role selection area 802 are in one-to-one correspondence with the four second role objects belonging to the same group as the user role object. This embodiment is applicable to multiplayer battle application scenarios including at least two group members.

As an implementation, the relative positions of at least two role operation objects in the role selection area 802 are determined by the order in which the at least two role operation objects entered the game system. As shown in FIG. 3, the role object associated with role operation object a11 entered the game system earlier than the role object associated with role operation object a12, and so on for role operation object a13 and role operation object a14, which is not repeated here.

Step 503: When a view acquisition gesture on at least one role operation object in the role selector object is detected, render on the graphical user interface a view image captured by a virtual lens associated with the at least one role operation object.

Here, when the view acquisition gesture on the at least one role operation object in the role selector object is detected, the method further includes: generating and sending a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image; and obtaining, during detection of the view acquisition gesture, the view image captured by the virtual lens.

Specifically, referring to FIG. 3, taking the view acquisition gesture as a long-press gesture as an example, when a long-press gesture on a role operation object in the role selection area 802 (for example, role operation object a11 shown in FIG. 3) is detected, the terminal generates a first instruction, establishes, based on the first instruction, a network link to another terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control that terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens. While the long-press gesture on role operation object a11 continues to be detected, the terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface. As shown in the view image display area 801 in FIG. 3 and its enlarged view 801a, the view image corresponding to role operation object a11 is displayed in the view image display area 801; the view image is the image that the user controlling the role object associated with role operation object a11 can see. For example, if the role object c11 associated with role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the view image display area 801 of the graphical user interface 800 displays a view image containing that release operation, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain a teammate's view image.

As an implementation, when the view acquisition gesture is terminated, a second instruction is generated, and the invocation of the virtual lens associated with the at least one role operation object is terminated based on the second instruction.

Specifically, taking the view acquisition gesture as a long-press gesture as an example, when the long-press gesture is terminated, a second instruction is generated; based on the second instruction, the invocation of the virtual lens associated with the at least one role operation object is terminated, and the network link between the terminal and the other terminal is terminated.

Step 504: Continuously record changes of the state attribute of the user role object in the graphical user interface and generate state attribute information of the user role object; continuously record changes of the skill attribute of the user role object in the graphical user interface and, when it is determined that the skill attribute of the user role object reaches a preset condition, generate skill attribute information of the user role object; and synchronously update the state attribute information and the skill attribute information to a server.

In this embodiment, on the one hand, the terminal continuously records changes of the state attribute of the user role object in the graphical user interface; that is, during information exchange between the user role object and other role objects, the terminal records changes of the state attribute of the user role object in real time, thereby obtaining the state attribute information of the user role object. The state attribute information includes, but is not limited to, a blood value, a hit point value, or skill attribute information of the user role object. The terminal synchronizes the obtained state attribute information of the user role object to the server in real time. Correspondingly, for at least one second role object belonging to the same group as the user role object, the terminal corresponding to the second role object also obtains the state attribute information of the second role object to the server in real time.

On the other hand, the terminal continuously records changes of the skill attribute of the user role object in the graphical user interface; that is, during information exchange between the user role object and other role objects, the terminal records changes of the skill attribute of the user role object in real time. After the user role object releases a skill object, a period of time is required before the skill object recovers; that is, only after that period of time can the skill object be released again. Therefore, in this embodiment, the terminal records changes of the skill attribute of the user role object in real time, and when it is determined that at least one skill object can be released, determines that the skill attribute of the user role object reaches the preset condition and generates skill attribute information of the user role object, where the skill attribute information indicates that the user role object can release at least one skill object. The terminal synchronizes the obtained skill attribute information of the user role object to the server in real time. Correspondingly, for at least one second role object belonging to the same group as the user role object, the terminal corresponding to the second role object also obtains the skill attribute information of the second role object to the server in real time.

Step 505: Obtain, from the server, state attribute information and skill attribute information of at least one role object associated with the at least one role operation object; render the state attribute information, in a first preset display manner, in at least one window slot corresponding to the associated role operation object; and render the skill attribute information, in a second preset display manner, in at least one window slot corresponding to the associated role operation object.

Here, on the one hand, the terminal obtains from the server the state attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the state attribute information of at least one role object associated with at least one role operation object in the role selector object of the graphical user interface; it can be understood that the terminal obtains the state attribute information of the second role objects belonging to the same group as the user role object, and renders the state attribute information of the second role objects, in a first preset display manner, in at least one window slot corresponding to the associated role operation object. FIG. 8 is a third schematic diagram of a graphical user interface of the information processing method according to an embodiment of the present invention. As shown in FIG. 8, taking the state attribute information as a blood value as an example, the outer ring area of role operation object a31 in the role selection area 802 serves as a blood-bar display area a311, and the proportion of blood in the blood-bar display area a311 represents the current blood value of the corresponding second role object. Of course, in the embodiments of the present invention, the manner in which the state attribute information is rendered, in the corresponding window slot, for the role operation object associated with the second role object is not limited to that shown in FIG. 8.

On the other hand, the terminal obtains from the server the skill attribute information of the at least one second role object synchronized by the other terminals, that is, obtains the skill attribute information of at least one role object associated with at least one role operation object in the role selector object of the graphical user interface; it can be understood that the terminal obtains the skill attribute information of the second role objects belonging to the same group as the user role object, and renders the skill attribute information of the second role objects, in a second preset display manner, in at least one window slot corresponding to the associated role operation object; display of the skill attribute information on a role operation object indicates that the corresponding second role object can currently release at least one skill object. Referring to FIG. 8, the skill attribute information is represented by a circular marker a312 in the upper right corner of role operation object a31 in the role selector object; when the role operation object displays the circular marker a312, the second role object associated with the role operation object can currently release at least one skill object; when the role operation object does not display the circular marker, the second role object associated with the role operation object currently cannot release any skill object. Of course, in the embodiments of the present invention, the manner in which the skill attribute information is rendered, in the corresponding window slot, for the role operation object associated with the second role object is not limited to that shown in FIG. 8.

With the technical solution of this embodiment of the present invention, on the one hand, a role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window slot of a role selector object deployed in a role selection area of the graphical user interface, so that the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction. On the other hand, the state attribute information and skill attribute information of the second role objects (that is, teammates) associated with the role operation objects in the role selector object are obtained by synchronizing the state attribute information and skill attribute information of the second role objects belonging to the same group, and are rendered in the corresponding window slots in specific manners; that is, the state attribute information and skill attribute information of the second role objects (teammates) are reflected on the corresponding role operation objects (UI avatars), so that the user can quickly learn the state attribute information and skill attribute information of the second role objects (teammates), improving the user's operation experience during information exchange.
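The skill-attribute preset condition described above (a released skill needs a recovery period; the circular marker a312 is shown only when at least one skill object can be released again) can be sketched as a cooldown check. This is a minimal sketch under assumed names (`Skill`, `skill_marker_visible`, explicit `now` timestamps); the actual recovery rules of the game system are not specified in the text.

```python
class Skill:
    """One skill object with a recovery (cooldown) period, in seconds."""
    def __init__(self, cooldown_s: float):
        self.cooldown_s = cooldown_s
        self.released_at = None   # None: never released, so immediately ready

    def release(self, now: float) -> None:
        self.released_at = now

    def ready(self, now: float) -> bool:
        # A skill can be released again only after its recovery period elapses.
        return self.released_at is None or now - self.released_at >= self.cooldown_s


def skill_marker_visible(skills, now: float) -> bool:
    """Preset condition of Step 504: show the circular marker if any skill is ready."""
    return any(s.ready(now) for s in skills)


s1, s2 = Skill(5), Skill(10)
s1.release(0)
s2.release(0)
print(skill_marker_visible([s1, s2], 3))   # False: both still recovering
print(skill_marker_visible([s1, s2], 6))   # True: s1 has recovered
```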
Based on the method embodiments of Embodiment One to Embodiment Four, a two-versus-two application scenario is described in detail below as an example. The two-versus-two application scenario is one in which a first role object controlled by terminal 1 and a second role object controlled by terminal 2 belong to a first group that exchanges information with a third role object controlled by terminal 3 and a fourth role object controlled by terminal 4; for other application scenarios, refer to the description of this scenario, which is not repeated in this embodiment. FIG. 9 is a schematic diagram of an interactive application of the information processing method according to an embodiment of the present invention. As shown in FIG. 9, this application scenario includes terminal 1, terminal 2, terminal 3, terminal 4, and a server 5; terminal 1 is triggered and controlled by user 1, terminal 2 by user 2, terminal 3 by user 3, and terminal 4 by user 4. The method includes:

For user 1, step 11: user 1 triggers the game system and logs in with identity authentication information, which may be a username and a password.

Step 12: Terminal 1 transmits the obtained identity authentication information to the server 5, which performs identity authentication and, after the authentication succeeds, returns a first graphical user interface to terminal 1. The first graphical user interface includes a first role object that can perform virtual operations based on trigger operations of user 1, including a move operation of the first role object, an attack operation or skill release operation of the first role object against other role objects, and the like.

For user 2, step 21: user 2 triggers the game system and logs in with identity authentication information, which may be a username and a password.

Step 22: Terminal 2 transmits the obtained identity authentication information to the server 5, which performs identity authentication and, after the authentication succeeds, returns a second graphical user interface to terminal 2. The second graphical user interface includes a second role object that can perform virtual operations based on trigger operations of user 2, including a move operation of the second role object, an attack operation or skill release operation of the second role object against other role objects, and the like.

In this embodiment, the first role object rendered in terminal 1 and the second role object rendered in terminal 2 belong to the same group, so a window slot of the role selector object of the first graphical user interface in terminal 1 includes a role operation object associated with the second role object. When this role operation object is operated by a view acquisition gesture (such as a long-press gesture), terminal 1 can invoke the virtual lens of terminal 2 and thereby obtain the view image of terminal 2 through the virtual lens, and terminal 1 displays the view image while the view acquisition gesture (such as a long-press gesture) continues. Correspondingly, a window slot of the role selector object of the second graphical user interface in terminal 2 includes a role operation object associated with the first role object; similarly to terminal 1, when the role operation object in terminal 2 is operated by a view acquisition gesture (such as a long-press gesture), terminal 2 can invoke the virtual lens of terminal 1 and thereby obtain the view image of terminal 1 through the virtual lens, which is not repeated here.
For user 3, step 31: user 3 triggers the game system and logs in with identity authentication information, which may be a username and a password.

Step 32: Terminal 3 transmits the obtained identity authentication information to the server 5, which performs identity authentication and, after the authentication succeeds, returns a third graphical user interface to terminal 3. The third graphical user interface includes a third role object that can perform virtual operations based on trigger operations of user 3, including a move operation of the third role object, an attack operation or skill release operation of the third role object against other role objects, and the like.

For user 4, step 41: user 4 triggers the game system and logs in with identity authentication information, which may be a username and a password.

Step 42: Terminal 4 transmits the obtained identity authentication information to the server 5, which performs identity authentication and, after the authentication succeeds, returns a fourth graphical user interface to terminal 4. The fourth graphical user interface includes a fourth role object that can perform virtual operations based on trigger operations of user 4, including a move operation of the fourth role object, an attack operation or skill release operation of the fourth role object against other role objects, and the like.

Similarly to terminal 1 and terminal 2, both terminal 3 and terminal 4 render a role operation object associated with the other role object belonging to the same group; when a view acquisition gesture (such as a long-press gesture) on the role operation object is detected, the view image of the role object associated with the role operation object is obtained, which is not repeated here.

In this embodiment, with user 1 and user 2 as a first group and user 3 and user 4 as a second group, the role objects in the first group and the role objects in the second group can, based on trigger operations, serve as objects of information exchange with each other.

So far, the login operations and initialization operations of the game system for user 1, user 2, user 3, and user 4 are completed.
For user 1, step 13: user 1 performs a trigger operation on the first graphical user interface presented by terminal 1. The trigger operation may be directed at any virtual resource object in the first graphical user interface, including a skill release operation on any skill object, an information exchange operation on any role object (which may be understood as a physical attack operation), a move operation of the first role object, and the like. In this embodiment, the trigger operation is a view acquisition gesture operation on a role operation object in the role selector object of the first graphical user interface.

Step 14: When terminal 1 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information exchange instruction on a corresponding role object (such as a physical attack instruction), a move instruction, and the like, and records changes of the corresponding data during execution of the instruction.

Step 15: Synchronize the changed data to the server 5 as first data corresponding to terminal 1.

For user 2, step 23: user 2 performs a trigger operation on the second graphical user interface presented by terminal 2. The trigger operation may be directed at any virtual resource object in the second graphical user interface, including a skill release operation on any skill object, an information exchange operation on any role object (which may be understood as a physical attack operation), a move operation of the second role object, and the like. In this embodiment, the trigger operation is a view acquisition gesture operation on a role operation object in the role selector object of the second graphical user interface.

Step 24: When terminal 2 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information exchange instruction on a corresponding role object (such as a physical attack instruction), a move instruction, and the like, and records changes of the corresponding data during execution of the instruction.

Step 25: Synchronize the changed data to the server 5 as second data corresponding to terminal 2.

For user 3, step 33: user 3 performs a trigger operation on the third graphical user interface presented by terminal 3. The trigger operation may be directed at any virtual resource object in the third graphical user interface, including a skill release operation on any skill object, an information exchange operation on any role object (which may be understood as a physical attack operation), a move operation of the third role object, and the like. In this embodiment, the trigger operation is a view acquisition gesture operation on a role operation object in the role selector object of the third graphical user interface.

Step 34: When terminal 3 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information exchange instruction on a corresponding role object (such as a physical attack instruction), a move instruction, and the like, and records changes of the corresponding data during execution of the instruction.

Step 35: Synchronize the changed data to the server 5 as third data corresponding to terminal 3.

For user 4, step 43: user 4 performs a trigger operation on the fourth graphical user interface presented by terminal 4. The trigger operation may be directed at any virtual resource object in the fourth graphical user interface, including a skill release operation on any skill object, an information exchange operation on any role object (which may be understood as a physical attack operation), a move operation of the fourth role object, and the like. In this embodiment, the trigger operation is a view acquisition gesture operation on a role operation object in the role selector object of the fourth graphical user interface.

Step 44: When terminal 4 obtains the trigger operation, it identifies the instruction corresponding to the trigger gesture and executes the instruction, for example, a skill release instruction on a corresponding operation object, an information exchange instruction on a corresponding role object (such as a physical attack instruction), a move instruction, and the like, and records changes of the corresponding data during execution of the instruction.

Step 45: Synchronize the changed data to the server 5 as fourth data corresponding to terminal 4.

For the server 5, step 50: perform data update based on the first data synchronized by terminal 1, the second data synchronized by terminal 2, the third data synchronized by terminal 3, and the fourth data synchronized by terminal 4, and synchronize the updated data to terminal 1, terminal 2, terminal 3, and terminal 4, respectively.
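The synchronization loop of steps 15/25/35/45 and the server-side update of step 50 can be sketched as follows. This is an illustrative sketch under assumed names (`GameServer`, `sync`); the actual update and conflict-resolution logic of server 5 is not specified in the text, so the merge here is a simple per-terminal snapshot.

```python
class GameServer:
    """Sketch of server 5: collects each terminal's changed data and
    returns the merged, updated state to be synchronized back."""
    def __init__(self):
        self.state = {}

    def sync(self, terminal_id: str, data: dict) -> dict:
        # Steps 15/25/35/45: a terminal uploads its changed data.
        self.state[terminal_id] = data
        # Step 50: the server updates and returns the merged state, which it
        # would then push to terminal 1 through terminal 4.
        return dict(self.state)


server = GameServer()
server.sync("terminal1", {"hp": 900})
snapshot = server.sync("terminal2", {"hp": 650})
# snapshot now contains the data of both terminals
```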
With reference to the description of the foregoing method embodiments, an embodiment of the present invention is described below using a real application scenario as an example: this application scenario involves multiplayer online battle arena games (MOBA, Multiplayer Online Battle Arena Games). Technical terms involved in MOBA include: 1) UI layer: the icons in the graphical user interface; 2) skill indicator: special effects, halos, and operations used to assist skill release; 3) virtual lens: can be understood as the camera in the game; 4) mini-map: a scaled-down version of the large map, which can be understood as a radar map, on which the information and positions of enemies and allies are displayed.

FIG. 10 is a fourth schematic diagram of a graphical user interface in the information processing method according to an embodiment of the present invention; this illustration is based on an application scenario of an actual interaction process. Referring to FIG. 10, the graphical user interface 90 rendered in this embodiment includes a role selection area 92, which includes a role selector object; in this illustration, the role selector object includes four window slots, each rendering a role operation object, namely role operation object 921, role operation object 922, role operation object 923, and role operation object 924; each role operation object is associated with a role object, and the four role objects belong to the same group as the user role object. In this illustration, the graphical user interface 90 further includes an area 91. When no view acquisition gesture on any role operation object in the role selection area 92 is detected, the area 91 renders a mini-map of the deployment layout of both sides (see FIG. 10); when a view acquisition gesture (such as a long-press gesture) on any role operation object in the role selection area 92 (such as role operation object 921) is detected, the terminal, through an instruction, invokes the virtual lens corresponding to the role object associated with role operation object 921, controls the virtual lens to capture a view image, and returns the view image to the graphical user interface 90 of the terminal; the area 91 then renders the view image of the role object associated with role operation object 921 (not shown in FIG. 10). In this way, the user can quickly obtain the view image of the corresponding second role object through a view acquisition gesture on the role operation object, greatly improving the user's operation experience during interaction.
Embodiment Five

An embodiment of the present invention further provides a terminal. FIG. 11 is a schematic structural diagram of a terminal according to Embodiment Five of the present invention. As shown in FIG. 11, the terminal includes a rendering processing unit 61, a deployment unit 62, a detection unit 63, and an operation execution unit 64, where:

the rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, render at least one virtual resource object on the graphical user interface, and further render, on the graphical user interface, the view image that is obtained by the operation execution unit 64 and captured by the virtual lens associated with at least one role operation object;

the deployment unit 62 is configured to deploy, in at least one role selection area of the graphical user interface, at least one role selector object including at least one window slot;

the detection unit 63 is configured to detect a view acquisition gesture on at least one role operation object in the role selector object; and

the operation execution unit 64 is configured to: when the detection unit 63 detects a view acquisition gesture on at least one role operation object in the role selector object, obtain a view image captured by the virtual lens associated with the at least one role operation object.

In this embodiment, the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object; the role selector object includes at least one window slot, at least some of which carry corresponding role operation objects. A role operation object may be represented in the graphical user interface by an identifier (the identifier may be an avatar) of the role object associated with the role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object. The role selector object may be rendered in the role selection area in manners including, but not limited to, a bar shape or a ring shape; that is, the role selector object may be represented by a role selection bar object or a role selection wheel object.

Specifically, referring to FIG. 3, the graphical user interface 800 rendered by the rendering processing unit 61 includes at least one virtual resource object, and the virtual resource objects include at least one user role object a10. The user of the terminal can exchange information through the graphical user interface, that is, input user commands; the user role object a10 can perform a first virtual operation based on a first user command detected by the terminal, where the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like. It can be understood that the user role object a10 is the role object controlled by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the user's operations. The graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation through a skill release operation.

In the illustration of FIG. 3, the deployment unit 62 deploys a role selection area 802 in the graphical user interface; a role selector object is deployed in the role selection area 802, and in this illustration the role selector object is represented by a role selection bar object (that is, the role selector object presents a bar-shaped display effect). The role selector object includes at least one window slot, and role operation objects associated with second role objects belonging to the same group as the user role object are rendered in corresponding window slots. Taking role operation objects represented by avatars as an example, the role selection area 802 includes at least one avatar, and the avatars are in one-to-one correspondence with at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, this illustration is a five-versus-five application scenario; there are four role objects in the same group as the user role object a10, and the role selection area 802 correspondingly includes four role operation objects, namely role operation object a11, role operation object a12, role operation object a13, and role operation object a14 shown in FIG. 3. It can be understood that the four role operation objects in the role selection area 802 are in one-to-one correspondence with the four second role objects belonging to the same group as the user role object. This embodiment is applicable to multiplayer battle application scenarios including at least two group members.

As an implementation, the relative positions of at least two role operation objects in the role selection area 802 are determined by the order in which the at least two role operation objects entered the game system. As shown in FIG. 3, the role object associated with role operation object a11 entered the game system earlier than the role object associated with role operation object a12, and so on for role operation object a13 and role operation object a14, which is not repeated here.

In this embodiment, the operation execution unit 64 is configured to: when the detection unit 63 detects a view acquisition gesture on at least one role operation object in the role selector object, generate and send a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to capture a view image; and obtain, during detection of the view acquisition gesture by the detection unit 63, the view image captured by the virtual lens.

Specifically, referring to FIG. 3, taking the view acquisition gesture as a long-press gesture as an example, when the detection unit 63 detects a long-press gesture on a role operation object in the role selection area 802 (for example, role operation object a11 shown in FIG. 3), the operation execution unit 64 generates a first instruction, establishes, based on the first instruction, a network link to another terminal corresponding to the role object associated with the role operation object, and sends the first instruction to that terminal over the network link, so as to control that terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens. While the detection unit 63 continues to detect the long-press gesture on role operation object a11, the operation execution unit 64 obtains the view image sent by the other terminal in real time, and the view image is rendered on the graphical user interface. As shown in the view image display area 801 in FIG. 3 and its enlarged view 801a, the view image corresponding to role operation object a11 is displayed in the view image display area 801; the view image is the image that the user controlling the role object associated with role operation object a11 can see. For example, if the role object c11 associated with role operation object a11 is currently performing a release operation of a skill object toward another role object b11, the view image display area 801 of the graphical user interface 800 displays a view image containing that release operation, as shown in FIG. 3. It can be understood that, through the view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain a teammate's view image.

As an implementation, the operation execution unit 64 is further configured to: when the detection unit 63 detects that the view acquisition gesture is terminated, generate a second instruction, and terminate the invocation of the virtual lens associated with the at least one role operation object based on the second instruction.

Specifically, taking the view acquisition gesture as a long-press gesture as an example, when the detection unit 63 detects that the long-press gesture is terminated, the operation execution unit 64 generates a second instruction, terminates the invocation of the virtual lens associated with the at least one role operation object based on the second instruction, and terminates the network link between the terminal and the other terminal.

Those skilled in the art should understand that the functions of the processing units in the terminal of the embodiments of the present invention can be understood with reference to the related description of the foregoing information processing method; the processing units in the information processing terminal of the embodiments of the present invention can be implemented by analog circuits that implement the functions described in the embodiments of the present invention, or by running, on a smart terminal, software that performs the functions described in the embodiments of the present invention.
Embodiment Six
Based on Embodiment Five, this embodiment of the present invention further provides a terminal. FIG. 12 is a schematic diagram of the composition of the terminal according to Embodiment Six of the present invention. As shown in FIG. 12, the terminal includes a rendering processing unit 61, a deployment unit 62, a detection unit 63, an operation execution unit 64, and a communication unit 65, wherein:
the rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the view images obtained by the operation execution unit 64 and captured by the virtual camera associated with at least one character operation object; and further configured to render the status attribute information obtained by the operation execution unit 64, in a first preset display manner, in at least one window position corresponding to the associated character operation object;
the deployment unit 62 is configured such that at least one character container object deployed in at least one character selection area of the graphical user interface includes at least one window position;
the detection unit 63 is configured to detect a view acquisition gesture on at least one character operation object in the character container object;
the operation execution unit 64 is configured to: when the detection unit 63 detects a view acquisition gesture on at least one character operation object in the character container object, obtain the view images captured by the virtual camera associated with the at least one character operation object; it is further configured to continuously record changes in the status attributes of the user character object in the graphical user interface, generate status attribute information of the user character object, and synchronize the status attribute information to a server through the communication unit 65; and further configured to obtain, from the server through the communication unit 65, status attribute information of at least one character object associated with the at least one character operation object.
In this embodiment, the functions of the rendering processing unit 61, the deployment unit 62, the detection unit 63, and the operation execution unit 64 can be understood with reference to the description of Embodiment Five and are not repeated here. The difference is that, in this embodiment, the operation execution unit 64 continuously records changes in the status attributes of the user character object in the graphical user interface; that is, while the user character object exchanges information with other character objects, the terminal records changes in the status attributes of the user character object in real time, thereby obtaining the status attribute information of the user character object. The status attribute information includes, but is not limited to, the hit-point (HP) value, health value, or skill attribute information of the user character object. The operation execution unit 64 synchronizes the obtained status attribute information of the user character object to the server in real time through the communication unit 65. Correspondingly, for at least one second character object belonging to the same group as the user character object, the terminal corresponding to that second character object also synchronizes the status attribute information of the second character object to the server in real time.
Further, the operation execution unit 64 obtains, from the server through the communication unit 65, the status attribute information of the at least one second character object synchronized by the other terminals, that is, the status attribute information of at least one character object associated with at least one character operation object in the character container object of the graphical user interface. In other words, the operation execution unit 64 obtains the status attribute information of the second character objects belonging to the same group as the user character object, and renders that status attribute information, in the first preset display manner, in at least one window position corresponding to the associated character operation objects. Referring to FIG. 6 and taking the HP value as an example of the status attribute information, the outer ring area of a character operation object in the character container object serves as an HP-bar display area, and the proportion of the filled portion of the HP-bar display area represents the current HP value of the corresponding second character object. Of course, in the embodiments of the present invention, the manner in which the status attribute information is rendered in the window position corresponding to the character operation object associated with the second character object is not limited to that shown in FIG. 6.
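The HP-ring display described above reduces to computing a fill fraction per window position from the synchronized status attribute information. A minimal sketch, assuming the server delivers `(current_hp, max_hp)` pairs per role (the function names and data shape are hypothetical):

```python
def hp_ring_fraction(current_hp, max_hp):
    """Fraction of the avatar's outer-ring HP-bar area (FIG. 6) to fill
    for the teammate's current HP, clamped to [0, 1]."""
    if max_hp <= 0:
        return 0.0
    return max(0.0, min(1.0, current_hp / max_hp))

def render_window_positions(status_by_role):
    # status_by_role: role_id -> (current_hp, max_hp), as fetched from the server.
    return {role: hp_ring_fraction(hp, mx)
            for role, (hp, mx) in status_by_role.items()}

print(render_window_positions({"a11": (75, 100), "a12": (0, 100), "a13": (120, 100)}))
# {'a11': 0.75, 'a12': 0.0, 'a13': 1.0}
```

Clamping guards against transiently inconsistent values (overheal or negative HP) arriving between synchronization updates.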
As an implementation, the operation execution unit 64 is further configured to continuously record changes in the skill attributes of the user character object in the graphical user interface and, upon determining that a skill attribute of the user character object meets a preset condition, generate skill attribute information of the user character object and synchronize the skill attribute information to the server through the communication unit 65; it is further configured to obtain, from the server through the communication unit 65, skill attribute information of at least one character object associated with the at least one character operation object;
correspondingly, the rendering processing unit 61 is further configured to render the skill attribute information obtained by the operation execution unit 64, in a second preset display manner, in at least one window position corresponding to the associated character operation object.
Specifically, the operation execution unit 64 continuously records changes in the skill attributes of the user character object in the graphical user interface; that is, while the user character object exchanges information with other character objects, the operation execution unit 64 records changes in the skill attributes of the user character object in real time. Because a skill object, once released by the user character object, requires a period of time to recover, that is, the skill object can be released again only after that period of time has elapsed, the operation execution unit 64 in this embodiment records changes in the skill attributes of the user character object in real time and, upon determining that at least one skill object can be released, determines that a skill attribute of the user character object meets the preset condition and generates skill attribute information of the user character object, the skill attribute information indicating that the user character object can release at least one skill object. The operation execution unit 64 synchronizes the obtained skill attribute information of the user character object to the server in real time through the communication unit 65. Correspondingly, for at least one second character object belonging to the same group as the user character object, the terminal corresponding to that second character object also synchronizes the skill attribute information of the second character object to the server in real time.
The operation execution unit 64 obtains, from the server through the communication unit 65, the skill attribute information of the at least one second character object synchronized by the other terminals, that is, the skill attribute information of at least one character object associated with at least one character operation object in the character container object of the graphical user interface. In other words, the operation execution unit 64 obtains the skill attribute information of the second character objects belonging to the same group as the user character object, and renders that skill attribute information, in the second preset display manner, in at least one window position corresponding to the associated character operation objects; skill attribute information displayed on a character operation object indicates that the corresponding second character object can currently release at least one skill object. Referring to FIG. 8, the skill attribute information is represented by a circular marker at the upper-right corner of a character operation object in the character selection area 802: when the circular marker is displayed on a character operation object, the second character object associated with that character operation object can currently release at least one skill object; when the circular marker is not displayed, the associated second character object cannot currently release any skill object. Of course, in the embodiments of the present invention, the manner in which the skill attribute information is rendered in the window position corresponding to the character operation object associated with the second character object is not limited to that shown in FIG. 8.
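The "at least one skill object can be released" condition above is effectively a cooldown check. As a hedged sketch (the cooldown bookkeeping and all names are assumptions, not the patent's prescribed data model):

```python
def skill_ready(last_cast_time, cooldown_seconds, now):
    """A skill object can be released again only after its recovery period elapses."""
    return (now - last_cast_time) >= cooldown_seconds

def ready_badge_visible(skills, now):
    # Show the circular marker on the avatar (FIG. 8) if at least one skill
    # of the associated second character object can currently be released.
    return any(skill_ready(t, cd, now) for t, cd in skills)

# Hypothetical (last cast time, cooldown) pairs for two skills of one character:
skills = [(0.0, 10.0), (5.0, 8.0)]
print(ready_badge_visible(skills, now=9.0))   # False: neither cooldown has elapsed
print(ready_badge_visible(skills, now=12.0))  # True: the first skill has recovered
```

On the sending side, the terminal would synchronize the boolean (the "skill attribute information") whenever it flips, rather than streaming raw timers.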
Those skilled in the art should understand that the functions of the processing units in the terminal of this embodiment of the present invention can be understood with reference to the foregoing description of the information processing method. The processing units in the information processing terminal of this embodiment may be implemented by analog circuits that realize the functions described in this embodiment, or by software that realizes those functions running on a smart terminal.
In Embodiments Five and Six of the present invention, the rendering processing unit 61, the deployment unit 62, the detection unit 63, and the operation execution unit 64 in the terminal may, in practical applications, each be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) in the terminal; the communication unit 65 in the terminal may, in practical applications, be implemented by a transceiver antenna or a communication interface of the terminal.
Embodiment Seven
This embodiment of the present invention further provides a terminal. The terminal may be an electronic device such as a PC, or a portable electronic device such as a tablet computer, laptop computer, or smartphone; a game system is executed on the terminal by installing a software application (such as a game application). The terminal includes at least a memory for storing data and a processor for data processing. The processor for data processing may be implemented by a microprocessor, a CPU, a DSP, or an FPGA; the memory contains operation instructions, which may be computer-executable code, through which the steps in the flow of the information processing method of the embodiments of the present invention described above are implemented.
FIG. 13 is a schematic diagram of the composition of the terminal according to Embodiment Seven of the present invention. As shown in FIG. 13, the terminal includes a processor 71 and a display 72. The processor 71 is configured to execute a software application and perform rendering on the display 72 to obtain a graphical user interface; the processor 71, the graphical user interface, and the software application are implemented on a game system.
The processor 71 is configured to render at least one virtual resource object on the graphical user interface, at least one character container object deployed in at least one character selection area of the graphical user interface including at least one window position;
and, when a view acquisition gesture on at least one character operation object in the character container object is detected, to render, on the graphical user interface, the view images captured by the virtual camera associated with the at least one character operation object.
Specifically, the processor 71 is configured to: when a view acquisition gesture on at least one character operation object in the character container object is detected, generate and send a first instruction, the first instruction being used to invoke the virtual camera associated with the at least one character operation object and to control the virtual camera to capture view images; and, while the view acquisition gesture continues to be detected, obtain the view images captured by the virtual camera.
As an implementation, the processor 71 is further configured to: when the view acquisition gesture ends, generate a second instruction and, based on the second instruction, stop invoking the virtual camera associated with the at least one character operation object.
As an implementation, the terminal further includes a communication device 74. The processor 71 is further configured to continuously record changes in the status attributes of the user character object in the graphical user interface, generate status attribute information of the user character object, and synchronize the status attribute information to a server through the communication device 74.
Correspondingly, the processor 71 is further configured to obtain, from the server through the communication device 74, status attribute information of at least one character object associated with the at least one character operation object, and to render the status attribute information, in a first preset display manner, in at least one window position corresponding to the associated character operation object.
As an implementation, the processor 71 is further configured to continuously record changes in the skill attributes of the user character object in the graphical user interface and, upon determining that a skill attribute of the user character object meets a preset condition, generate skill attribute information of the user character object and synchronize the skill attribute information to the server through the communication device 74.
Correspondingly, the processor 71 is further configured to obtain, from the server through the communication device 74, skill attribute information of at least one character object associated with the at least one character operation object, and to render the skill attribute information, in a second preset display manner, in at least one window position corresponding to the associated character operation object.
In this embodiment, the terminal includes a processor 71, a display 72, a memory 73, an input device 76, a bus 75, and a communication device 74. The processor 71, memory 73, input device 76, display 72, and communication device 74 are all connected through the bus 75, which is used to transfer data among the processor 71, memory 73, display 72, and communication device 74.
The input device 76 is mainly configured to obtain the user's input operations, and may differ depending on the terminal. For example, when the terminal is a PC, the input device 76 may be a mouse, a keyboard, or the like; when the terminal is a portable device such as a smartphone or tablet computer, the input device 76 may be a touch screen.
In this embodiment, the memory 73 stores a computer storage medium in which computer-executable instructions are stored, the computer-executable instructions being used for the information processing method of the embodiments of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be performed by hardware related to program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes media capable of storing program code, such as removable storage devices, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs.
Alternatively, if the above integrated units of the present invention are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The storage medium includes media capable of storing program code, such as removable storage devices, ROM, RAM, magnetic disks, and optical discs.
The foregoing is merely a specific implementation of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Industrial Applicability
Through the window positions of a character container object deployed in a character selection area of a graphical user interface, the embodiments of the present invention render, in the corresponding window positions, the character operation objects associated with the second character objects belonging to the same group as the user character object, so that the user can quickly obtain the view image of a corresponding second character object through a view acquisition gesture on a character operation object, greatly improving the user's operating experience during interaction.

Claims (22)

  1. An information processing method, wherein a graphical user interface is obtained by executing a software application on a processor of a terminal and performing rendering on a display of the terminal, the processor, the graphical user interface, and the software application being implemented on a game system, the method comprising:
    rendering at least one virtual resource object on the graphical user interface;
    at least one character container object deployed in at least one character selection area of the graphical user interface comprising at least one window position;
    when a view acquisition gesture on at least one character operation object in the character container object is detected, rendering, on the graphical user interface, a view image captured by a virtual camera associated with the at least one character operation object.
  2. The method according to claim 1, wherein, when the view acquisition gesture on at least one character operation object in the character container object is detected, the method further comprises:
    generating and sending a first instruction, the first instruction being used to invoke the virtual camera associated with the at least one character operation object and to control the virtual camera to capture view images;
    and obtaining, while the view acquisition gesture is detected, the view images captured by the virtual camera.
  3. The method according to claim 2, wherein, when the view acquisition gesture ends, the method further comprises:
    generating a second instruction, and stopping, based on the second instruction, the invoking of the virtual camera associated with the at least one character operation object.
  4. The method according to claim 1, further comprising: continuously recording changes in status attributes of a user character object in the graphical user interface, generating status attribute information of the user character object, and synchronizing the status attribute information to a server.
  5. The method according to claim 1, further comprising: continuously recording changes in skill attributes of a user character object in the graphical user interface, generating skill attribute information of the user character object upon determining that a skill attribute of the user character object meets a preset condition, and synchronizing the skill attribute information to a server.
  6. The method according to claim 4, further comprising: obtaining, from the server, status attribute information of at least one character object associated with the at least one character operation object, and rendering the status attribute information, in a first preset display manner, in at least one window position corresponding to the associated character operation object.
  7. The method according to claim 5, further comprising: obtaining, from the server, skill attribute information of at least one character object associated with the at least one character operation object, and rendering the skill attribute information, in a second preset display manner, in at least one window position corresponding to the associated character operation object.
  8. A terminal, comprising a rendering processing unit, a deployment unit, a detection unit, and an operation execution unit, wherein:
    the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, to render at least one virtual resource object on the graphical user interface, and further to render, on the graphical user interface, the view image obtained by the operation execution unit and captured by the virtual camera associated with at least one character operation object;
    the deployment unit is configured such that at least one character container object deployed in at least one character selection area of the graphical user interface comprises at least one window position;
    the detection unit is configured to detect a view acquisition gesture on at least one character operation object in the character container object;
    the operation execution unit is configured to obtain, when the detection unit detects a view acquisition gesture on at least one character operation object in the character container object, the view image captured by the virtual camera associated with the at least one character operation object.
  9. The terminal according to claim 8, wherein the operation execution unit is configured to generate and send, when the detection unit detects the view acquisition gesture on at least one character operation object in the character container object, a first instruction, the first instruction being used to invoke the virtual camera associated with the at least one character operation object and to control the virtual camera to capture view images; and the detection unit obtains, while detecting the view acquisition gesture, the view images captured by the virtual camera.
  10. The terminal according to claim 9, wherein the operation execution unit is further configured to generate, when the detection unit detects that the view acquisition gesture ends, a second instruction, and to stop, based on the second instruction, the invoking of the virtual camera associated with the at least one character operation object.
  11. The terminal according to claim 8, wherein the terminal further comprises a communication unit;
    the operation execution unit is further configured to continuously record changes in status attributes of a user character object in the graphical user interface, generate status attribute information of the user character object, and synchronize the status attribute information to a server through the communication unit.
  12. The terminal according to claim 8, wherein the terminal further comprises a communication unit;
    the operation execution unit is further configured to continuously record changes in skill attributes of a user character object in the graphical user interface, generate skill attribute information of the user character object upon determining that a skill attribute of the user character object meets a preset condition, and synchronize the skill attribute information to a server through the communication unit.
  13. The terminal according to claim 11, wherein the operation execution unit is further configured to obtain, from the server through the communication unit, status attribute information of at least one character object associated with the at least one character operation object;
    correspondingly, the rendering processing unit is further configured to render the status attribute information obtained by the operation execution unit, in a first preset display manner, in at least one window position corresponding to the associated character operation object.
  14. The terminal according to claim 12, wherein the operation execution unit is further configured to obtain, from the server through the communication unit, skill attribute information of at least one character object associated with the at least one character operation object;
    correspondingly, the rendering processing unit is further configured to render the skill attribute information obtained by the operation execution unit, in a second preset display manner, in at least one window position corresponding to the associated character operation object.
  15. A terminal, comprising a processor and a display, the processor being configured to execute a software application and perform rendering on the display to obtain a graphical user interface, the processor, the graphical user interface, and the software application being implemented on a game system;
    the processor being configured to render at least one virtual resource object on the graphical user interface, at least one character container object deployed in at least one character selection area of the graphical user interface comprising at least one window position;
    and, when a view acquisition gesture on at least one character operation object in the character container object is detected, to render, on the graphical user interface, a view image captured by the virtual camera associated with the at least one character operation object.
  16. The terminal according to claim 15, wherein the processor is configured to generate and send, when the view acquisition gesture on at least one character operation object in the character container object is detected, a first instruction, the first instruction being used to invoke the virtual camera associated with the at least one character operation object and to control the virtual camera to capture view images; and to obtain, while the view acquisition gesture is detected, the view images captured by the virtual camera.
  17. The terminal according to claim 16, wherein the processor is further configured to generate, when the view acquisition gesture ends, a second instruction, and to stop, based on the second instruction, the invoking of the virtual camera associated with the at least one character operation object.
  18. The terminal according to claim 15, wherein the terminal further comprises a communication device;
    the processor is further configured to continuously record changes in status attributes of a user character object in the graphical user interface, generate status attribute information of the user character object, and synchronize the status attribute information to a server through the communication device.
  19. The terminal according to claim 15, wherein the terminal further comprises a communication device;
    the processor is further configured to continuously record changes in skill attributes of a user character object in the graphical user interface, generate skill attribute information of the user character object upon determining that a skill attribute of the user character object meets a preset condition, and synchronize the skill attribute information to a server through the communication device.
  20. The terminal according to claim 18, wherein the processor is further configured to obtain, from the server through the communication device, status attribute information of at least one character object associated with the at least one character operation object, and to render the status attribute information, in a first preset display manner, in at least one window position corresponding to the associated character operation object.
  21. The terminal according to claim 19, wherein the processor is further configured to obtain, from the server through the communication device, skill attribute information of at least one character object associated with the at least one character operation object, and to render the skill attribute information, in a second preset display manner, in at least one window position corresponding to the associated character operation object.
  22. A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the information processing method according to any one of claims 1 to 7.
