CN113440848B - In-game information marking method and device and electronic device - Google Patents

In-game information marking method and device and electronic device

Info

Publication number
CN113440848B
CN113440848B
Authority
CN
China
Prior art keywords: game, marking, control, target, client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110795947.6A
Other languages
Chinese (zh)
Other versions
CN113440848A (en)
Inventor
贺长江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110795947.6A
Publication of CN113440848A
Application granted
Publication of CN113440848B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0485 Scrolling or panning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 Drag-and-drop
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1068 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad
    • A63F 2300/1075 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to detect the point of contact of the player on a surface, e.g. floor mat, touch pad using a touch screen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an in-game information marking method and apparatus, and an electronic device. The method comprises the following steps: in response to a first touch operation that moves a first control from an initial position to a marking position in a graphical user interface of a first client, generating a second control in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in a game scene, the second control is used for providing a plurality of candidate marking options corresponding to the target position, and each candidate marking option contains corresponding marking content; in response to a second touch operation performed on the second control, determining a target marking option from the plurality of candidate marking options; and displaying the marking content of the target marking option on a graphical user interface of a second client and displaying a first position identifier at the target position. The method solves the technical problems of high operational complexity and low marking accuracy in the in-game information marking approaches provided by the related art.

Description

In-game information marking method and device and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to an in-game information marking method and apparatus, and an electronic device.
Background
At present, in the third-person-perspective mobile games and online battle mobile games provided by the related art, players communicate by means of voice chat, typing in an input box, and the like. However, the ways of marking a specific coordinate position on the game map and transmitting the marking information to teammates are limited, and mainly include the following implementations:
In the first mode, the game player marks a specific coordinate position in an area by tapping the thumbnail map, and the mark is synchronized to teammates. FIG. 1 is a schematic diagram of an in-game information marking method according to the related art. As shown in FIG. 1, the game player taps the thumbnail map in the upper-left corner with a finger to mark a specific coordinate position and transmit the marking information to teammates.
However, this approach has significant drawbacks:
(1) The thumbnail map occupies only a small part of the screen, so finger operation is difficult.
(2) It is difficult to mark a precise coordinate position on the thumbnail map.
(3) For a specific coordinate position, the type and content of the marking information are too limited, so the prompt information the player wants to convey to teammates cannot be expressed accurately.
In the second mode, the game player first enlarges the thumbnail map and then marks a specific coordinate position in the area on the enlarged thumbnail map, and the mark is synchronized to teammates. FIG. 2 is a schematic diagram of another in-game information marking method according to the related art. As shown in FIG. 2, after enlarging the thumbnail map in the upper-left corner, the game player marks a specific coordinate position in the area on the enlarged thumbnail map, and the mark is synchronized to teammates.
However, this approach also has significant drawbacks:
(1) Although the thumbnail map is enlarged, it is still difficult to mark a precise coordinate position on it.
(2) Enlarging the thumbnail map and then marking a specific coordinate position in the area on the enlarged map must be performed repeatedly, so the operation is cumbersome.
(3) The marked coordinate position is notified to all teammates simultaneously; the marking player cannot judge teammates' real-time positions and selectively notify only the teammates who need the information, which increases invalid information and degrades the game experience.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present invention provide an in-game information marking method and apparatus, and an electronic device, so as to at least solve the technical problems of high operational complexity and low marking accuracy in the in-game information marking approaches provided by the related art.
According to one embodiment of the present invention, an in-game information marking method is provided. The game includes a first game character controlled by a first client and a second game character controlled by a second client, the first game character and the second game character belonging to the same character lineup; a graphical user interface is provided by the first client and the second client respectively, and the content displayed by the graphical user interface includes at least part of a game scene and a first control. The in-game information marking method includes the following steps:
in response to a first touch operation that moves the first control from an initial position to a marking position in the graphical user interface of the first client, generating a second control in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in the game scene, the second control is used for providing a plurality of candidate marking options corresponding to the target position, and each candidate marking option contains corresponding marking content; in response to a second touch operation performed on the second control, determining a target marking option from the plurality of candidate marking options; and displaying the marking content of the target marking option on the graphical user interface of the second client, and displaying a first position identifier at the target position.
Optionally, there are a plurality of second game characters, and the content displayed by the graphical user interface further includes a thumbnail map. The in-game information marking method further comprises: in response to a sub-touch operation in which the touch point of the first touch operation has moved from the initial position to the marking position and then does not change position within a preset time, displaying the thumbnail map of the first client in an enlarged manner and displaying a third control on the graphical user interface of the first client, wherein the thumbnail map displays at least the position information of the plurality of second game characters, and the third control is used for providing a plurality of candidate character identification options corresponding to the plurality of second game characters; in response to a third touch operation performed on the third control, determining a target character identification option from the plurality of candidate character identification options, and the target second game character corresponding to the target character identification option; and displaying a second position identifier on the graphical user interface of the second client corresponding to the target second game character, wherein the second position identifier is used for identifying the relative orientation between the position of the target second game character and the target position.
Optionally, the in-game information marking method further comprises: in response to a fourth touch operation performed on the first control, controlling the first control to return from the marking position to the initial position so as to end the in-game information marking operation, wherein the fourth touch operation and the first touch operation are performed continuously.
Optionally, the in-game information marking method further comprises: controlling the first control to automatically resume being displayed at the initial position.
Optionally, the in-game information marking method further comprises: in response to a fifth touch operation performed on a blank touch area outside the second control, closing the second control so as to end the in-game information marking operation.
Optionally, the in-game information marking method further comprises: in response to a sixth touch operation performed on the thumbnail map of the first client, adjusting the range of the game scene displayed by the graphical user interface of the first client; and determining the marking position from the adjusted game scene range.
Optionally, the second control is a window control, and the window control includes a plurality of touch areas that respectively correspond to different marking options; the content of the marking options may be system-preset content or user-defined content.
There is also provided, in accordance with an embodiment of the present invention, an in-game information marking apparatus. The game includes a first game character controlled by a first client and a second game character controlled by a second client, the first game character and the second game character belonging to the same character lineup; a graphical user interface is provided by the first client and the second client respectively, and the content displayed by the graphical user interface includes at least part of a game scene and a first control. The in-game information marking apparatus comprises:
a control module, configured to generate a second control in the graphical user interface of the first client in response to a first touch operation that moves the first control from an initial position to a marking position in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in the game scene, the second control is used for providing a plurality of candidate marking options corresponding to the target position, and each candidate marking option contains corresponding marking content; a determining module, configured to determine a target marking option from the plurality of candidate marking options in response to a second touch operation performed on the second control; and a display module, configured to display the marking content of the target marking option on the graphical user interface of the second client and display a first position identifier at the target position.
Optionally, there are a plurality of second game characters, and the content displayed by the graphical user interface further includes a thumbnail map. The in-game information marking apparatus further includes a processing module, configured to: in response to a sub-touch operation in which the touch point of the first touch operation has moved from the initial position to the marking position and then does not change position within a preset time, display the thumbnail map of the first client in an enlarged manner and display a third control on the graphical user interface of the first client, wherein the thumbnail map displays at least the position information of the plurality of second game characters, and the third control is used for providing a plurality of candidate character identification options corresponding to the plurality of second game characters; in response to a third touch operation performed on the third control, determine a target character identification option from the plurality of candidate character identification options, and the target second game character corresponding to the target character identification option; and display a second position identifier on the graphical user interface of the second client corresponding to the target second game character, wherein the second position identifier is used for identifying the relative orientation between the position of the target second game character and the target position.
Optionally, the control module is further configured to control the first control to return from the marking position to the initial position in response to a fourth touch operation performed on the first control, so as to end the in-game information marking operation, wherein the fourth touch operation and the first touch operation are performed continuously.
Optionally, the control module is further configured to control the first control to automatically resume displaying at the initial position.
Optionally, the in-game information marking apparatus further includes a closing module, configured to close the second control in response to a fifth touch operation performed on a blank touch area outside the second control, so as to end the in-game information marking operation.
Optionally, the in-game information marking apparatus further includes an adjustment module, configured to adjust the range of the game scene displayed by the graphical user interface of the first client in response to a sixth touch operation performed on the thumbnail map of the first client; the determining module is further configured to determine the marking position from the adjusted game scene range.
Optionally, the second control is a window control, and the window control includes a plurality of touch areas that respectively correspond to different marking options; the content of the marking options may be system-preset content or user-defined content.
According to one embodiment of the present invention, there is also provided a non-volatile storage medium in which a computer program is stored, wherein the computer program is configured to perform the in-game information marking method of any one of the above when run.
According to one embodiment of the present invention, there is also provided a processor, wherein the processor is configured to run a program, wherein the program is configured to execute the in-game information marking method of any one of the above.
According to one embodiment of the present invention, there is also provided an electronic device including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the in-game information marking method in any one of the above.
In at least some embodiments of the present invention, in response to a first touch operation that moves a first control from an initial position to a marking position in the graphical user interface of the first client, a second control is generated in the graphical user interface of the first client, where the marking position corresponds to a target position of a virtual model to be marked in the game scene, the second control provides a plurality of candidate marking options corresponding to the target position, and each candidate marking option contains corresponding marking content. A target marking option is determined from the plurality of candidate marking options in response to a second touch operation performed on the second control, and the marking content of the target marking option is displayed on the graphical user interface of the second client while a first position identifier is displayed at the target position. In this way, the newly added marking-position control and marking-content control reduce the operational complexity of the in-game information marking approach and improve its accuracy, achieving the technical effects of lower operational complexity, higher marking accuracy, greater marking convenience, and richer marking content, thereby solving the technical problems of high operational complexity and low marking accuracy in the in-game information marking approaches provided by the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a schematic diagram of an in-game information marking method according to the related art;
FIG. 2 is a schematic diagram of another in-game information marking method according to the related art;
FIG. 3 is a block diagram showing a hardware configuration of a mobile terminal for an in-game information marking method according to an embodiment of the present invention;
FIG. 4 is a flowchart of an in-game information marking method according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of acquiring a marking position according to an alternative embodiment of the present invention;
FIG. 6 is a schematic diagram of acquiring marking content according to an alternative embodiment of the present invention;
FIG. 7 is a schematic diagram of the synchronized display of the marking position and the marking content in the thumbnail map and the game scene according to an alternative embodiment of the present invention;
FIG. 8 is a schematic diagram of determining a target game character from a plurality of candidate character identification options according to an alternative embodiment of the present invention;
FIG. 9 is a schematic diagram of marking an arbitrary coordinate position outside the scene range originally displayed by the graphical user interface according to an alternative embodiment of the present invention;
FIG. 10 is a structural block diagram of an in-game information marking apparatus according to one embodiment of the present invention;
FIG. 11 is a structural block diagram of an in-game information marking apparatus according to an alternative embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one embodiment of the present invention, there is provided an embodiment of an in-game information marking method, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (MID), a PAD, or a game console. FIG. 3 is a block diagram showing a hardware configuration of a mobile terminal for an in-game information marking method according to an embodiment of the present invention. As shown in FIG. 3, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 3; the processor 102 may include, but is not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processing unit (NPU), a tensor processing unit (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106 for communication functions, an input/output device 108, and a display device 110. It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely illustrative and does not limit the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in FIG. 3, or have a different configuration from that shown in FIG. 3.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the in-game information marking method in the embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the in-game information marking method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The input to the input/output device 108 may come from a plurality of human interface devices (HIDs), for example: a keyboard and mouse, a gamepad, or other dedicated game controllers (e.g., a steering wheel, a fishing rod, a dance mat, a remote control, etc.). Some human interface devices may provide output functions in addition to input functions, such as the force feedback and vibration of a gamepad, or the audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD), a touch-screen liquid crystal display (LCD), or a touch display (also referred to as a "touch screen"). The liquid crystal display may enable the user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a graphical user interface (GUI), and the user may interact with the GUI through finger contacts and/or gestures on the touch-sensitive surface. The human-computer interaction functions optionally include interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing; the executable instructions for performing these human-computer interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
In this embodiment, an in-game information marking method running on the above mobile terminal is provided. The game includes a first game character controlled by a first client and a second game character controlled by a second client, the first game character and the second game character belonging to the same character lineup; a graphical user interface is provided by the first client and the second client respectively, and the content displayed by the graphical user interface includes at least part of a game scene and a first control. FIG. 4 is a flowchart of an in-game information marking method according to one embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
step S40, responding to a first touch operation of moving a first control from an initial position to a marking position in a graphical user interface of a first client, and generating a second control in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in a game scene, the second control is used for providing a plurality of marking options to be selected corresponding to the target position, and the marking options to be selected contain corresponding marking contents;
the first control is a newly added mark position control in the graphical user interface and is used for determining the mark position in the game scene. The initial position of the first control may be any display position in the graphical user interface that does not conflict with existing controls (e.g., directional controls, skill controls, etc.), such as: an upper right region of the graphical user interface, a center region of the graphical user interface, a right region of the graphical user interface, and the like. The first touch operation may include, but is not limited to: a sliding (drag) operation, a continuous operation of a heavy pressing operation and a sliding operation, a continuous operation of a long pressing operation and a sliding operation, and the like.
World space is a three-dimensional space used to describe absolute positions, i.e., positions in the world coordinate system; in the general case, the origin of world space is located at the center of the game space. Camera space is also a three-dimensional space. In camera space, the virtual camera is usually located at the origin, and the coordinate axes can be chosen arbitrarily; for example, the +x axis points to the right of the virtual camera, the +y axis points upward from the virtual camera, and the +z axis points behind the virtual camera. Under the default view of the three-dimensional game scene captured by the virtual camera, the representation of the three camera-space axes in world space can be calculated; a transformation matrix from camera space to world space is then constructed, and its inverse yields the transformation matrix from world space to camera space. Screen space is a two-dimensional space: a coordinate point in camera space can be converted to clip space using the projection matrix and then projected from clip space to screen space to produce corresponding 2D coordinates. Conversely, given the marking position in screen space, the world-space coordinate position of the virtual model (for example, virtual grass) corresponding to the marking position can be determined by the inverse transformation chain screen space, clip space, camera space, world space.
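The inverse transformation chain described above can be sketched as follows. This is a minimal illustration assuming OpenGL-style column-vector matrices, NDC z in [-1, 1], and a flat ground plane at y = 0; it is not the patent's required implementation.

```python
import numpy as np

def screen_to_world_on_ground(screen_xy, screen_size, view_matrix, proj_matrix, ground_y=0.0):
    """Unproject a screen-space marking position to world space (illustrative only)."""
    w, h = screen_size
    # Screen space -> normalized device coordinates (screen y is assumed to grow downward).
    ndc_x = 2.0 * screen_xy[0] / w - 1.0
    ndc_y = 1.0 - 2.0 * screen_xy[1] / h

    # world -> camera -> clip is proj @ view, so its inverse maps clip space back to world space.
    inv_vp = np.linalg.inv(proj_matrix @ view_matrix)

    # Unproject points on the near and far planes to form a pick ray in world space.
    near = inv_vp @ np.array([ndc_x, ndc_y, -1.0, 1.0])
    far = inv_vp @ np.array([ndc_x, ndc_y, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]

    # Intersect the ray with the ground plane y = ground_y to get the world-space target
    # position of the virtual model (e.g. the virtual grass) corresponding to the mark.
    direction = far - near
    t = (ground_y - near[1]) / direction[1]
    return near + t * direction
```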
FIG. 5 is a schematic diagram of acquiring a marking position according to an alternative embodiment of the present invention. As shown in FIG. 5, the game player moves the first control from the initial position to an arbitrary coordinate position in the game scene by performing a sliding (dragging) operation on it, thereby obtaining the marking position. For example, the initial position corresponds to virtual grass B in the game scene and the marking position corresponds to virtual grass A in the game scene; the game player moves the first control from virtual grass B to virtual grass A by performing the sliding (dragging) operation on the first control.
It should be noted that the display state of the first control may be either fully opaque or semi-transparent. When the display state of the first control is semi-transparent, the first control switches from the semi-transparent state to the fully opaque state while the game player performs a sliding (dragging) operation on it; when no sliding (dragging) operation is being performed on the first control, it remains semi-transparent so as to minimize occlusion of the game scene.
In an alternative embodiment, the second control may be generated at a position associated with the marking position.
The associated position may be a position adjacent to the marking position, for example, the upper-left or upper-right of the marking position. The second control is a marking-content control and is used to determine the marking content in the game scene. When the game player ends the first touch operation at the marking position, the second control is generated at the associated position so as to provide the plurality of candidate marking options. That is, the default display state of the second control is hidden; when the game player finishes the first touch operation at the marking position, the display state of the second control at the associated position switches from hidden to semi-transparent or fully opaque.
In an optional embodiment, the second control is a Tips window control, which includes a plurality of touch areas. Each of the touch areas corresponds to a different marking option. Each marking option may correspond to a different identifier (ID) or to a different marking type. The marking types include an early-warning type, a help-seeking type, an attack type, a gathering type, and the like. For example, early-warning marking options may be "be careful of grass A" or "danger at position C", and a help-seeking marking option may be "defensive tower D needs support". The content of each marking option may be either system-preset content or content customized by the game player.
FIG. 6 is a schematic diagram of acquiring marking content according to an alternative embodiment of the present invention. As shown in FIG. 6, the marking-content control is generated at the upper-left of the marking position when the player's finger ends the first touch operation at the marking position. The marking-content control includes three touch areas, corresponding respectively to marking option 1 (e.g., "someone is in virtual grass A"), marking option 2 (e.g., "gather at virtual grass A"), and marking option 3 (e.g., "be careful of virtual grass A"). It should be noted that the number of marking options included in the marking-content control may be set flexibly according to the needs of the actual game scene; that is, the marking-content control may contain more or fewer marking options.
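As a hedged sketch of how such candidate marking options might be organized in code, the types and example texts below follow the description above, while the class and function names are hypothetical and not part of the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class MarkType(Enum):
    WARNING = auto()   # early-warning type, e.g. "be careful of grass A"
    HELP = auto()      # help-seeking type, e.g. "defensive tower D needs support"
    ATTACK = auto()    # attack type
    GATHER = auto()    # gathering (collection) type

@dataclass
class MarkOption:
    option_id: int
    mark_type: MarkType
    content: str       # system-preset text or text customized by the game player

def default_options_for(model_name: str) -> list[MarkOption]:
    """Three candidate marking options for a marked virtual model (cf. the grass example)."""
    return [
        MarkOption(1, MarkType.ATTACK, f"Someone is in {model_name}!"),
        MarkOption(2, MarkType.GATHER, f"Gather at {model_name}"),
        MarkOption(3, MarkType.WARNING, f"Be careful of {model_name}"),
    ]
```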
Step S41: in response to a second touch operation performed on the second control, determining a target marking option from the plurality of candidate marking options.
The second touch operation may include, but is not limited to: a tap operation, a sliding operation, a pressing operation, a long-press operation, and the like. The game player determines the target marking option from the plurality of candidate marking options by performing the second touch operation on the second control. As also shown in FIG. 6, the game player determines marking option 1 as the target marking option from marking option 1 (e.g., "someone is in virtual grass A"), marking option 2 (e.g., "gather at virtual grass A"), and marking option 3 (e.g., "be careful of virtual grass A") by performing a tap operation on the marking-content control; that is, the game player wishes to send the marking content "Someone is in virtual grass A!".
It should be noted that the target marking option may include one marking option or multiple marking options of different marking types, so that richer prompt information can be conveyed. For example, for the grass virtual model, marking options of the early-warning type, the gathering type, and the attack type may be provided; the game player may determine the target marking option from attack-type marking option 1 (e.g., "attack the person in virtual grass A"), gathering-type marking option 2 (e.g., "gather at virtual grass A"), and early-warning-type marking option 3 (e.g., "be careful of virtual grass A") by tapping the marking-content control. With this embodiment, the player can select a suitable marking option for the marking position according to the game situation, avoiding the problem that accurate prompt information cannot be sent to teammates because the prompt content is too limited.
Step S42: displaying the marking content of the target marking option on the graphical user interface of the second client, and displaying the first position identifier at the target position.
After the target marking option is determined from the candidate marking options, the marking content of the target marking option may be displayed on the graphical user interface of the second client, and the first position identifier may be displayed at the target position. The display may use a warning special effect (such as highlighting or flashing).
In an alternative embodiment, the content displayed by the graphical user interface may further include a thumbnail map (i.e., a mini-map). The first position identifier and the marking content may be displayed in the thumbnail map and the game scene of the first client, and synchronously displayed in the thumbnail map and the game scene of the second client. The second client is a game client operated by some or all teammates of the player of the first client.
FIG. 7 is a schematic diagram of the synchronized display of the marking position and the marking content in the thumbnail map and the game scene according to an alternative embodiment of the present invention. As shown in FIG. 7, the marking position (e.g., virtual grass A) and the marking content (e.g., "someone is in virtual grass A") are displayed in the game scene, and the marking content (e.g., a warning special effect) is displayed in the thumbnail map at the location corresponding to the marking position in the game scene (e.g., the lower-right area of the thumbnail map).
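A hedged sketch of how the mark might be propagated so that it appears synchronously on teammates' thumbnail maps and game scenes follows; the message schema, transport, and GUI methods are assumptions for illustration, not part of the patent.

```python
import json

def build_mark_message(sender_id, world_pos, option):
    """Serialize the confirmed mark; `option` is a MarkOption-like object (see the sketch above)."""
    return json.dumps({
        "type": "team_mark",
        "sender": sender_id,
        "target_pos": list(world_pos),     # world-space target position
        "mark_type": option.mark_type.name,
        "content": option.content,         # e.g. "Someone is in virtual grass A!"
    })

def on_mark_message(raw, local_gui):
    """Called on each teammate's (second) client when the broadcast message arrives."""
    msg = json.loads(raw)
    # First position identifier plus marking content at the target position in the game scene...
    local_gui.show_scene_marker(msg["target_pos"], msg["content"], effect="flash")
    # ...and the same mark mirrored at the corresponding spot on the thumbnail map (cf. FIG. 7).
    local_gui.show_minimap_marker(msg["target_pos"], effect="highlight")
```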
Through the above steps, in response to a first touch operation that moves the first control from the initial position to the marking position in the graphical user interface of the first client, a second control is generated in the graphical user interface of the first client, where the marking position corresponds to the target position of the virtual model to be marked in the game scene, the second control provides a plurality of candidate marking options corresponding to the target position, and each candidate marking option contains corresponding marking content; a target marking option is determined from the plurality of candidate marking options in response to a second touch operation performed on the second control; and the marking content of the target marking option is displayed on the graphical user interface of the second client while the first position identifier is displayed at the target position. In this way, the newly added marking-position control and marking-content control reduce the operational complexity of the in-game information marking approach and improve its accuracy, achieving the technical effects of lower operational complexity, higher marking accuracy, greater marking convenience, and richer marking content, thereby solving the technical problems of high operational complexity and low marking accuracy in the in-game information marking approaches provided by the related art.
Optionally, there are a plurality of second game characters, and the content displayed by the graphical user interface further includes a thumbnail map. The in-game information marking method may further include the following steps:
Step S43: in response to a sub-touch operation in which the touch point of the first touch operation has moved from the initial position to the marking position and then does not change position within a preset time, displaying the thumbnail map of the first client in an enlarged manner and displaying a third control on the graphical user interface of the first client, wherein the thumbnail map displays at least the position information of the plurality of second game characters, and the third control is used for providing a plurality of candidate character identification options corresponding to the plurality of second game characters;
Step S44: in response to a third touch operation performed on the third control, determining a target character identification option from the plurality of candidate character identification options, and the target second game character corresponding to the target character identification option;
Step S45: displaying a second position identifier on the graphical user interface of the second client corresponding to the target second game character, wherein the second position identifier is used for identifying the relative orientation between the position of the target second game character and the target position.
When the touch point of the first touch operation has moved from the initial position to the marking position and does not change position within the preset time, the thumbnail map of the first client may be displayed in an enlarged manner and a third control may be displayed on the graphical user interface of the first client. The thumbnail map displays at least the position information of the plurality of second game characters, and the third control provides a plurality of candidate character identification options (e.g., game character avatars) corresponding to the plurality of second game characters. By performing a third touch operation on the third control, the target character identification option, and the target second game character corresponding to it, can be determined from the plurality of candidate character identification options. Optionally, the third control is a Tips window control; for example, it may be a virtual wheel control or a drop-down window control. A second position identifier is then displayed on the graphical user interface of the second client corresponding to the target second game character; the second position identifier identifies the relative orientation between the position of the target second game character and the target position. At this time, the first position identifier and the second position identifier may be displayed simultaneously on the graphical user interface of the second client corresponding to the target second game character, while only the first position identifier is displayed on the graphical user interfaces of the second clients corresponding to the remaining second game characters.
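The "held still for a preset time" trigger in step S43 can be sketched as a dwell detector; the time and distance thresholds below are assumed values, not from the patent.

```python
import math
import time

DWELL_SECONDS = 0.8        # assumed preset time
STILL_TOLERANCE_PX = 8     # movement below this still counts as "no position change"

class DwellDetector:
    """Hypothetical helper that fires once when the touch point stops moving long enough."""

    def __init__(self):
        self.anchor = None
        self.anchor_time = None

    def update(self, touch_pos, now=None):
        """Call on every touch-move sample; returns True exactly once when the dwell fires."""
        now = now if now is not None else time.monotonic()
        if (self.anchor is None or
                math.hypot(touch_pos[0] - self.anchor[0],
                           touch_pos[1] - self.anchor[1]) > STILL_TOLERANCE_PX):
            self.anchor, self.anchor_time = touch_pos, now   # finger moved: restart the timer
            return False
        if now - self.anchor_time >= DWELL_SECONDS:
            self.anchor_time = float("inf")                  # prevent repeated firing
            return True                                      # enlarge the thumbnail map, show the third control
        return False
```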
In addition to the second position identifier described above, a distance identifier may also be displayed. Specifically, when the target position in the game scene is far away and lies beyond the range of the game scene displayed by the current graphical user interface, a distance identifier corresponding to the second position identifier may be displayed at the edge of the graphical user interface.
FIG. 8 is a schematic diagram of determining a target game character from a plurality of candidate character identification options according to an alternative embodiment of the present invention. As shown in FIG. 8, game player A, who controls the first game character, wants to request support from the plurality of second game characters and drags the first control to the target position. At this time, the thumbnail map of the first client operated by game player A is displayed in an enlarged manner, and the third control is displayed on the graphical user interface of the first client. The third control provides character avatar B of game player B, who controls second game character 1; character avatar C of game player C, who controls second game character 2; and character avatar D of game player D, who controls second game character 3. Game player A can observe the positions of nearby second game characters in the thumbnail map and select character avatar D in the third control. In addition to the first position identifier in the thumbnail map, the client of the selected target game character displays the second position identifier in the game interface to indicate the relative orientation between the position of the target game character and the target position. In this way, the player can select the target game character to be notified, which prevents invalid information from being pushed to other game characters, and the second position identifier displayed on the game interface also helps the target game character reach the target position accurately and in time.
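A minimal sketch, under assumed conventions, of the two pieces of information discussed above: the relative azimuth from the target teammate's character to the marked target position, and a simple clamp that pins the off-screen distance indicator to the edge of the graphical user interface (a real implementation might instead project along the direction from the screen center).

```python
import math

def relative_azimuth(char_pos, target_pos):
    """Bearing in degrees from the teammate's character to the marked target position,
    measured counterclockwise from the +x axis of the (assumed) 2D map plane."""
    dx, dy = target_pos[0] - char_pos[0], target_pos[1] - char_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def clamp_to_screen_edge(target_screen_xy, screen_size, margin=24):
    """Pin the distance indicator to the screen edge when the target lies outside the visible scene."""
    w, h = screen_size
    x = min(max(target_screen_xy[0], margin), w - margin)
    y = min(max(target_screen_xy[1], margin), h - margin)
    return (x, y)
```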
Optionally, the method for marking information in a game may further include the following steps:
Step S46: in response to the fourth touch operation performed on the first control, controlling the first control to return from the marking position to the initial position so as to end the in-game information marking operation, wherein the fourth touch operation and the first touch operation are performed continuously.
The fourth touch operation may include, but is not limited to: a sliding (dragging) operation, a hard-press operation followed continuously by a sliding operation, a long-press operation followed continuously by a sliding operation, and the like. When the game player wishes to end the in-game information marking operation, the game player can control the first control to return from the marking position to the initial position by performing the fourth touch operation on the first control. The fourth touch operation and the first touch operation are performed continuously; that is, after moving the first control from the initial position to the marking position by a sliding (dragging) operation, the game player's finger does not leave the first control but continues the sliding (dragging) operation and returns the first control from the marking position to the initial position, thereby ending the in-game information marking operation and increasing the flexibility of the in-game information marking approach.
It should be noted that returning the first control from the marking position to the initial position by the fourth touch operation to end the in-game information marking operation is mainly suitable for the case where the game player has operated the first control by mistake; that is, the game player does not actually intend to complete an in-game information marking operation by sliding (dragging) the first control. In this case, the game player's finger does not leave the first control but continues the sliding (dragging) operation and returns the first control from the marking position to the initial position to end the operation. If the game player fails to return the first control to the initial position in time, the game system will incorrectly record the marking position and continue the subsequent marking process, thereby mistakenly disturbing the other game players.
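A minimal sketch of this cancel path, assuming a tolerance radius around the initial position; the radius value and function names are illustrative assumptions only.

```python
import math

CANCEL_RADIUS_PX = 40   # assumed tolerance around the initial position

def handle_touch_up(touch_pos, initial_pos, spawn_second_control):
    """End of the continuous first + fourth touch operation.

    Dropping the control back near its initial position cancels the marking
    (e.g. after a mis-touch); otherwise the second control is requested at the
    marking position, as in step S40.
    """
    dx, dy = touch_pos[0] - initial_pos[0], touch_pos[1] - initial_pos[1]
    if math.hypot(dx, dy) <= CANCEL_RADIUS_PX:
        return None                                    # marking operation ended without a mark
    spawn_second_control(mark_position=touch_pos)      # continue with the candidate marking options
    return touch_pos
```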
Optionally, the method for marking information in a game may further include the following steps:
Step S47: controlling the first control to automatically resume being displayed at the initial position.
After the marking position and the marking content are displayed synchronously on the graphical user interfaces of the first client and the second client, the first control no longer needs to stay at the marking position; instead, it can automatically resume being displayed at the initial position and wait for the next marking position to be determined. This enhances the flexibility of the in-game information marking approach and avoids occluding the game scene.
Optionally, the method for marking information in a game may further include the following steps:
In step S48, in response to a fifth touch operation performed on the blank touch area outside the second control, the second control is closed to end the in-game information marking operation.
The fifth touch operation may include, but is not limited to: a click operation, a sliding operation, a pressing operation, a long-press operation, and the like. When the game player ends the first touch operation at the marking position, a second control is generated at a position associated with the marking position to provide a plurality of marking options to be selected. For example, when the game player's finger leaves the first control, a marking content control is generated at an upper-left position relative to the marking position. The marking content control includes three touch areas corresponding to marking option 1 (e.g., someone is in virtual grass A), marking option 2 (e.g., gather at virtual grass A), and marking option 3 (e.g., beware of virtual grass A), respectively. At this time, if the game player decides not to send the marking content after all, a click operation may be performed on the blank touch area outside this Tips window control to close the second control and end the in-game information marking operation. The first control then no longer needs to stay at the marking position; it can automatically resume display at the initial position and wait for the next marking position to be determined, thereby increasing the flexibility of the in-game information marking manner.
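A minimal sketch of this dismissal behaviour is given below in Python; the MarkOptionPopup class, its rectangle layout, and the example option strings are assumptions made for illustration rather than details of the embodiment. A tap inside one of the vertically stacked touch areas returns the corresponding marking option, while a tap on the blank area outside the window simply closes it.

class MarkOptionPopup:
    """Sketch of the second control (a small options window): a tap inside one
    of its touch areas selects a marking option, while a tap anywhere outside
    the window closes it without sending anything."""

    def __init__(self, rect, options):
        self.rect = rect            # (x, y, width, height) of the window
        self.options = options      # marking option contents shown in the touch areas
        self.visible = True

    def contains(self, x: float, y: float) -> bool:
        rx, ry, rw, rh = self.rect
        return rx <= x <= rx + rw and ry <= y <= ry + rh

    def on_tap(self, x: float, y: float):
        if not self.contains(x, y):
            self.visible = False    # blank area tapped: end the marking operation
            return None
        index = int((y - self.rect[1]) / (self.rect[3] / len(self.options)))
        return self.options[min(index, len(self.options) - 1)]

popup = MarkOptionPopup(rect=(100, 40, 160, 120),
                        options=["someone is in virtual grass A",
                                 "gather at virtual grass A",
                                 "beware of virtual grass A"])
print(popup.on_tap(500, 500))   # None: the outside tap closes the popup
print(popup.visible)            # False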
Optionally, the method for marking information in a game may further include the following steps:
In step S49, in response to a sixth touch operation performed on the thumbnail map of the first client, the game scene range displayed by the graphical user interface of the first client is adjusted;
In step S50, the marking position is determined from the adjusted game scene range.
The sixth touch operation may include, but is not limited to: a sliding (dragging) operation, a continuous hard-press-then-slide operation, a continuous long-press-then-slide operation, and the like. The game player's left hand continuously adjusts the game scene range displayed by the graphical user interface of the first client by performing a sliding operation on the thumbnail map of the first client, while the right hand controls the first control to move from the initial position to any coordinate position in the game scene by performing a sliding (dragging) operation on the first control, thereby obtaining the marking position. In this way, any coordinate position outside the scene range originally displayed by the graphical user interface can be marked.
FIG. 9 is a schematic diagram of marking an arbitrary coordinate position outside the scene range originally displayed by the graphical user interface according to an alternative embodiment of the present invention. As shown in FIG. 9, the game player's left hand continuously adjusts the game scene range displayed by the graphical user interface by performing a sliding operation on the thumbnail map, while the right hand controls the first control to move from the initial position to an arbitrary coordinate position in the game scene by performing a sliding (dragging) operation on the first control, thereby obtaining the marking position.
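As a rough illustration of the two-handed interaction in FIG. 9, the Python sketch below assumes a very simple camera model (a world-space origin plus a fixed pixels-per-unit scale; the SceneViewport name and the numbers are invented for the example). The left-hand slide pans the visible scene range, and the right-hand drag position is converted into world coordinates against the already-panned range, so the resulting marking position can lie outside the originally displayed range.

class SceneViewport:
    """Sketch of how a left-hand slide on the thumbnail map can pan the visible
    scene while the right hand drags the first control, so that the mark can
    land on a world position that was initially outside the displayed range."""

    def __init__(self, world_origin, pixels_per_unit):
        self.world_origin = list(world_origin)   # world coords of the screen's top-left corner
        self.pixels_per_unit = pixels_per_unit

    def pan(self, dx_units: float, dy_units: float) -> None:
        # Left hand sliding on the thumbnail map shifts the visible scene range.
        self.world_origin[0] += dx_units
        self.world_origin[1] += dy_units

    def screen_to_world(self, sx: float, sy: float):
        # Right hand's drag position (screen pixels) mapped into world coordinates
        # using the current (already panned) scene range.
        return (self.world_origin[0] + sx / self.pixels_per_unit,
                self.world_origin[1] + sy / self.pixels_per_unit)

viewport = SceneViewport(world_origin=(0.0, 0.0), pixels_per_unit=10.0)
viewport.pan(50.0, 0.0)                    # thumbnail slide reveals a new area
print(viewport.screen_to_world(200, 300))  # (70.0, 30.0): a mark outside the original range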
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk), which includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present invention.
This embodiment also provides an in-game information marking apparatus. The apparatus is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 10 is a block diagram showing the structure of an in-game information marking apparatus according to one embodiment of the present invention. As shown in Fig. 10, the apparatus includes: a control module 10, configured to generate a second control in the graphical user interface of the first client in response to a first touch operation of moving a first control from an initial position to a marking position in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in the game scene, the second control is configured to provide a plurality of marking options to be selected corresponding to the target position, and the marking options to be selected contain corresponding marking content; a determining module 20, configured to determine a target marking option from the plurality of marking options to be selected in response to a second touch operation performed on the second control; and a display module 30, configured to display the marking content of the target marking option on the graphical user interface of the second client and display a first position identifier at the target position.
Optionally, there are a plurality of second game characters, and the content displayed by the graphical user interface also includes a thumbnail map. Fig. 11 is a block diagram showing the structure of an in-game information marking apparatus according to an alternative embodiment of the present invention. As shown in Fig. 11, in addition to all the modules shown in Fig. 10, the apparatus further includes: a processing module 40, configured to, in response to a sub-touch operation in which the touch point of the first touch operation moves from the initial position to the marking position and does not change position within a preset time, enlarge and display the thumbnail map of the first client and display a third control on the graphical user interface of the first client, wherein the thumbnail map displays at least position information of the plurality of second game characters, and the third control is configured to provide a plurality of character identification options to be selected corresponding to the plurality of second game characters; determine, in response to a third touch operation performed on the third control, a target character identification option from the plurality of character identification options to be selected and a target second game character corresponding to the target character identification option; and display a second position identifier on the graphical user interface of the second client corresponding to the target game character, wherein the second position identifier is used to identify the relative azimuth between the position of the target game character and the target position.
Optionally, the control module 10 is further configured to, in response to a third touch operation performed on the first control, control the first control to return from the marking position to the initial position to end the in-game information marking operation, wherein the third touch operation and the first touch operation are performed continuously.
Optionally, the control module 10 is further configured to control the first control to automatically resume display at the initial position after the marking position and the marking content are synchronously displayed.
Optionally, as shown in Fig. 11, in addition to all the modules shown in Fig. 10, the apparatus further includes: a closing module 50, configured to close the second control to end the in-game information marking operation in response to a fourth touch operation performed on the blank touch area outside the second control.
Optionally, as shown in Fig. 11, in addition to all the modules shown in Fig. 10, the apparatus further includes: an adjustment module 60, configured to adjust the game scene range displayed by the graphical user interface of the first client in response to a fifth touch operation performed on the thumbnail map of the first client; the determining module 20 is further configured to determine the marking position from the adjusted game scene range.
Optionally, the second control is a window control, and the window control includes a plurality of touch areas respectively corresponding to different marking options, and the content of the marking options adopts system preset content or user-defined content.
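As a small illustrative sketch (the function and constant names and the example strings are assumptions rather than part of the embodiment), the marking option contents offered by the window control could be assembled from system presets plus any user-defined entries as follows.

# System preset marking option contents (illustrative strings only).
PRESET_MARKING_OPTIONS = [
    "someone is in virtual grass A",
    "gather at virtual grass A",
    "beware of virtual grass A",
]

def build_marking_options(user_defined=None):
    """Return the contents shown in the window control's touch areas:
    the system presets followed by any user-defined contents."""
    return PRESET_MARKING_OPTIONS + list(user_defined or [])

print(build_marking_options(["need ammunition here"]))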
It should be noted that each of the above modules may be implemented by software or hardware; for the latter, this may be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor, or the above modules are located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described nonvolatile storage medium may be configured to store a computer program for performing the steps of:
S1, responding to a first touch operation of moving a first control from an initial position to a marking position in a graphical user interface of a first client, generating a second control in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in a game scene, the second control is used for providing a plurality of marking options to be selected corresponding to the target position, and the marking options to be selected contain corresponding marking content;
S2, determining a target marking option from the plurality of marking options to be selected in response to a second touch operation performed on the second control;
S3, displaying the marking content of the target marking option on a graphical user interface of a second client, and displaying a first position identifier at the target position.
Alternatively, in the present embodiment, the above non-volatile storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above processor may be configured to perform the following steps by means of a computer program:
S1, responding to a first touch operation of moving a first control from an initial position to a marking position in a graphical user interface of a first client, generating a second control in the graphical user interface of the first client, wherein the marking position corresponds to a target position of a virtual model to be marked in a game scene, the second control is used for providing a plurality of marking options to be selected corresponding to the target position, and the marking options to be selected contain corresponding marking content;
S2, determining a target marking option from the plurality of marking options to be selected in response to a second touch operation performed on the second control;
S3, displaying the marking content of the target marking option on a graphical user interface of a second client, and displaying a first position identifier at the target position.
Alternatively, specific examples in this embodiment may refer to the examples described in the foregoing embodiments and optional implementations, and details are not repeated here.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, each embodiment is described with its own emphasis; for any portion not described in detail in a given embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units may be a division by logical function and may be implemented in another manner: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as standalone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that various modifications and refinements may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. An in-game information marking method, characterized in that the game comprises: a first game character controlled by a first client and a plurality of second game characters controlled by a plurality of second clients, wherein the first game character and the second game characters belong to the same camp, the first client and the second clients each provide a graphical user interface, and the content displayed by the graphical user interface comprises at least part of a game scene and a first control, the in-game information marking method comprising:
responding to a sub-touch operation in which a touch point of a first touch operation moves from an initial position to a marking position and does not change position within a preset time, enlarging and displaying a thumbnail map of the first client, and displaying a third control on the graphical user interface of the first client, wherein the thumbnail map displays at least position information of the plurality of second game characters, the third control is used for providing a plurality of character identification options to be selected corresponding to the plurality of second game characters, and the marking position corresponds to a target position of a virtual model to be marked in the game scene;
determining a target character identification option from the plurality of character identification options to be selected and a target second game character corresponding to the target character identification option in response to a third touch operation performed on the third control;
displaying a second position identifier and a first position identifier on a graphical user interface of the second client corresponding to the target second game character, and displaying the first position identifier on graphical user interfaces of the second clients corresponding to the other second game characters, wherein the second position identifier is used for identifying the relative orientation between the position of the target second game character and the target position, and the first position identifier is used for identifying the target position.
2. The method according to claim 1, characterized in that the method further comprises:
responding to a first touch operation of moving the first control from an initial position to a marking position in a graphical user interface of a first client, and generating a second control in the graphical user interface of the first client, wherein the second control is used for providing a plurality of marking options to be selected corresponding to the target position, and the marking options to be selected contain corresponding marking contents;
determining a target marking option from the plurality of marking options to be selected in response to a second touch operation executed on the second control;
and displaying the marking content of the target marking options on the graphical user interfaces of the second clients, and displaying a first position identifier at the target position.
3. The in-game information marking method according to claim 1, characterized in that the in-game information marking method further comprises:
responding to a fourth touch operation executed on the first control, and controlling the first control to return from the marking position to the initial position to end the in-game information marking operation, wherein the fourth touch operation and the first touch operation are executed continuously.
4. The in-game information marking method according to claim 1, characterized in that the in-game information marking method further comprises:
controlling the first control to automatically resume display at the initial position.
5. The in-game information marking method according to claim 2, characterized in that the in-game information marking method further comprises:
closing the second control to end the in-game information marking operation in response to a fifth touch operation executed on the blank touch area outside the second control.
6. The in-game information marking method according to claim 2, characterized in that the in-game information marking method further comprises:
responding to a sixth touch operation executed on the thumbnail map of the first client, and adjusting the game scene range displayed by the graphical user interface of the first client;
determining the marking position from the adjusted game scene range.
7. The in-game information marking method according to claim 2, wherein the second control is a window control, and the window control comprises a plurality of touch areas respectively corresponding to different marking options, and the content of the marking options adopts system preset content or user-defined content.
8. An in-game information marking apparatus, characterized in that the game comprises: a first game character controlled by a first client and a plurality of second game characters controlled by a plurality of second clients, wherein the first game character and the second game characters belong to the same camp, the first client and the second clients each provide a graphical user interface, and the content displayed by the graphical user interface comprises at least part of a game scene and a first control, the in-game information marking apparatus comprising:
a processing module, configured to, in response to a sub-touch operation in which the touch point of the first touch operation moves from an initial position to a marking position and does not change position within a preset time, enlarge and display a thumbnail map of the first client and display a third control on the graphical user interface of the first client, wherein the thumbnail map is at least used for displaying position information of the plurality of second game characters, the third control is used for providing a plurality of character identification options to be selected corresponding to the plurality of second game characters, and the marking position corresponds to a target position of a virtual model to be marked in the game scene;
wherein the apparatus is further configured to determine, in response to a third touch operation performed on the third control, a target character identification option from the plurality of character identification options to be selected and a target second game character corresponding to the target character identification option;
and the apparatus is further configured to display a second position identifier and a first position identifier on a graphical user interface of the second client corresponding to the target second game character, and display the first position identifier on graphical user interfaces of the second clients corresponding to the other second game characters, wherein the second position identifier is used for identifying the relative orientation between the position of the target second game character and the target position, and the first position identifier is used for identifying the target position.
9. A non-volatile storage medium, wherein a computer program is stored in the storage medium, and the computer program is arranged to perform the in-game information marking method of any one of claims 1 to 7 when run.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the in-game information marking method as claimed in any of the claims 1 to 7.
CN202110795947.6A 2021-07-14 2021-07-14 In-game information marking method and device and electronic device Active CN113440848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110795947.6A CN113440848B (en) 2021-07-14 2021-07-14 In-game information marking method and device and electronic device

Publications (2)

Publication Number Publication Date
CN113440848A CN113440848A (en) 2021-09-28
CN113440848B true CN113440848B (en) 2024-03-01

Family

ID=77816139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110795947.6A Active CN113440848B (en) 2021-07-14 2021-07-14 In-game information marking method and device and electronic device

Country Status (1)

Country Link
CN (1) CN113440848B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327731B (en) * 2021-12-31 2023-11-14 北京字跳网络技术有限公司 Information display method, device, equipment and medium
CN116983613A (en) * 2022-07-26 2023-11-03 腾讯科技(上海)有限公司 Virtual object marking method, device, terminal and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102557808B1 (en) * 2018-05-16 2023-07-20 주식회사 엔씨소프트 Gaming service system and method for sharing memo therein

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773705A (en) * 2020-08-06 2020-10-16 网易(杭州)网络有限公司 Interaction method and device in game scene
CN112090073A (en) * 2020-09-27 2020-12-18 网易(杭州)网络有限公司 Game display method and device
CN112827170A (en) * 2021-02-08 2021-05-25 网易(杭州)网络有限公司 Information processing method in game, electronic device and storage medium
CN112891929A (en) * 2021-02-08 2021-06-04 网易(杭州)网络有限公司 Game signal processing method and device
CN113101637A (en) * 2021-04-19 2021-07-13 网易(杭州)网络有限公司 Scene recording method, device, equipment and storage medium in game

Also Published As

Publication number Publication date
CN113440848A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
US20240168623A1 (en) System, method and graphical user interface for controlling a game
US10850196B2 (en) Terminal device
US11623142B2 (en) Data processing method and mobile terminal
US9652063B2 (en) Input direction determination system, terminal, server, network system, information storage medium, and input direction determination method
CN113440848B (en) In-game information marking method and device and electronic device
CN113908550A (en) Virtual character control method, nonvolatile storage medium, and electronic apparatus
CN113318428A (en) Game display control method, non-volatile storage medium, and electronic device
CN113262476B (en) Position adjusting method and device of operation control, terminal and storage medium
CN114653059A (en) Method and device for controlling virtual character in game and non-volatile storage medium
CN108543308B (en) Method and device for selecting virtual object in virtual scene
CN111389003B (en) Game role control method, device, equipment and computer readable storage medium
CN113590013B (en) Virtual resource processing method, nonvolatile storage medium and electronic device
CN114832371A (en) Method, device, storage medium and electronic device for controlling movement of virtual character
CN114504808A (en) Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus
US20220080308A1 (en) System and method for precise positioning with touchscreen gestures
CN109605403B (en) Robot, robot operating system, robot control device, robot control method, and storage medium
CN113440835A (en) Control method and device of virtual unit, processor and electronic device
CN113926187A (en) Object control method and device in virtual scene and terminal equipment
CN113318430A (en) Virtual character posture adjusting method and device, processor and electronic device
CN113941143A (en) Virtual card processing method, nonvolatile storage medium and electronic device
WO2024078324A1 (en) Virtual object control method and apparatus, and storage medium and electronic device
WO2024051414A1 (en) Hot area adjusting method and apparatus, device, storage medium, and program product
CN114504812A (en) Virtual role control method and device
CN117942558A (en) Method and device for controlling display in game, electronic equipment and readable storage medium
CN117339212A (en) Method for controlling interaction of virtual game characters, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant