CN113559501B - Virtual unit selection method and device in game, storage medium and electronic equipment - Google Patents


Info

Publication number: CN113559501B
Application number: CN202110863742.7A
Authority: CN (China)
Prior art keywords: user interface, graphical user, selection, game, frame
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113559501A
Inventors: 桑田, 陈梦川
Current assignee: Netease Hangzhou Network Co Ltd (the listed assignee may be inaccurate)
Original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority: CN202110863742.7A
Publication of application: CN113559501A
Publication of grant: CN113559501B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6045 Methods for processing data by generating or executing the game program for mapping control signals received from the input arrangement into game commands

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to the technical field of human-computer interaction, and provides a virtual unit selection method and device in a game, a computer-readable storage medium, and an electronic device. The method comprises the following steps: in response to a target trigger operation acting on the graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface and determining a first game location in the game scene to which the frame selection mark is mapped; in response to a sliding operation of a touch medium on the graphical user interface, controlling the frame selection mark to move in the game scene; in response to the end of the sliding operation, determining a second game location in the game scene to which the frame selection mark at the preset position is mapped, so as to generate a selection frame in the graphical user interface based on the first game location and the second game location; and selecting a target virtual unit according to the generated selection frame. Because the selection frame is generated directly from movement control of the frame selection mark at the preset position, the method and device can improve the selection efficiency of virtual units.

Description

Virtual unit selection method and device in game, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of human-computer interaction, and in particular to a virtual unit selection method in a game, a virtual unit selection device in a game, a computer-readable storage medium, and an electronic device.
Background
Selecting some required virtual units from a plurality of virtual units in the game, so as to control the selected virtual units to perform further game operations, is one of the conventional operations of RTS (Real-Time Strategy) games.
In the related art, a selection frame is formed by a two-finger touch to select the game units within it. However, this approach requires the player to click and operate with two fingers, so the operation steps are complex and inefficient; moreover, when the player operates with two fingers on the screen, the player's line of sight is blocked, which degrades the game experience.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a virtual unit selection method and device in a game, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the problems that virtual units in a game are selected inefficiently and that the selection process may block the player's line of sight, resulting in a poor game experience.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a virtual unit selection method in a game, in which a game screen of the game is displayed through a graphical user interface, the game screen including a part or all of a game scene and a plurality of virtual units located in the game scene, the method comprising:
in response to a target trigger operation acting on the graphical user interface, displaying a frame selection mark located at a preset position of the graphical user interface and determining a first game location in the game scene to which the frame selection mark is mapped;
controlling the frame selection mark to move in the game scene in response to the sliding operation of the touch medium on the graphical user interface;
in response to the end of the sliding operation, determining a second game location in the game scene to which the frame selection mark at the preset position is mapped, so as to generate a selection frame in the graphical user interface based on the first game location and the second game location;
and selecting a target virtual unit from the plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
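Purely as an illustration of the four steps above (not the patent's implementation; the class name, the coordinate convention, and the `screen_to_scene` helper are all assumptions), the flow can be sketched as a small state holder:

```python
class FrameSelector:
    """Sketch of the claimed flow: a target trigger shows a marker at a fixed
    preset screen position and records the first game location it maps to;
    sliding moves the scene under the marker; when the slide ends, re-mapping
    the same screen position yields the second game location."""

    def __init__(self, preset_pos, screen_to_scene):
        self.preset_pos = preset_pos            # fixed screen position of the marker
        self.screen_to_scene = screen_to_scene  # screen point -> game location (assumed helper)
        self.first_loc = None

    def on_trigger(self):
        # Step 1: the marker appears; record the first game location.
        self.first_loc = self.screen_to_scene(self.preset_pos)

    def on_slide_end(self):
        # Step 3: the marker never left the preset screen position, but the
        # scene moved under it during the slide, so re-mapping the same
        # screen point now yields a different game location.
        second_loc = self.screen_to_scene(self.preset_pos)
        return self.first_loc, second_loc
```

Here `screen_to_scene` stands in for whatever camera projection the engine provides; it is the camera moving during the slide that makes the two mapped locations differ.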
In an exemplary embodiment of the present disclosure, based on the foregoing solution, a game screen obtained by a virtual camera capturing a part or all of a game scene and a plurality of virtual units located in the game scene is displayed through the graphical user interface;
the controlling the frame selection mark to move in the game scene in response to the sliding operation of the touch medium on the graphical user interface comprises the following steps:
adjusting the pose of the virtual camera in response to the sliding operation of the touch medium on the graphical user interface, so as to change the game picture displayed in the graphical user interface;
and changing, based on the change of the game picture displayed in the graphical user interface, the game location in the game scene to which the frame selection mark at the preset position is mapped, so as to control the frame selection mark to move in the game scene.
In an exemplary embodiment of the disclosure, based on the foregoing solution, a frame selection mark in an initial display state is displayed at a preset position of the graphical user interface;
the method for displaying the frame selection mark at the preset position of the graphical user interface comprises the following steps of:
And in response to a target triggering operation acting on the graphical user interface, modifying the state of the box selection marker from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing aspect, the displaying the box selection mark at the preset position of the graphical user interface in response to the target trigger operation acting on the graphical user interface includes:
in response to an initial trigger operation acting on the graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface and configuring the state of the frame selection mark as an initial display state;
and in response to a target trigger operation acting on the graphical user interface, modifying the state of the frame selection mark from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, the controlling the movement of the box selection marker in the game scene in response to a sliding operation of the touch medium on the graphical user interface includes:
and when the frame selection mark is in the target display state, responding to the sliding operation of the touch medium on the graphical user interface, and controlling the frame selection mark to move in the game scene.
In an exemplary embodiment of the present disclosure, based on the foregoing aspect, the displaying the box selection mark at the preset position of the graphical user interface in response to the target trigger operation acting on the graphical user interface includes:
and responding to target triggering operation acted on a preset control in the graphical user interface, and displaying a frame selection mark positioned at a preset position of the graphical user interface.
In an exemplary embodiment of the present disclosure, based on the foregoing, the generating a selection box based on the first game location and the second game location includes:
generating the selection frame based on a first position to which the first game location is mapped in the graphical user interface and a second position to which the second game location is mapped in the graphical user interface, wherein the second position coincides with the preset position.
In an exemplary embodiment of the present disclosure, based on the foregoing, the generating a selection frame based on the first position to which the first game location is mapped in the graphical user interface and the second position to which the second game location is mapped in the graphical user interface includes:
a rectangular selection box is generated in the graphical user interface by taking the shortest connecting line between the first position and the second position as a diagonal line.
In an exemplary embodiment of the present disclosure, based on the foregoing, the generating a selection box based on the first location of the first game place map in the graphical user interface and the second location of the second game place map in the graphical user interface includes:
and generating a circular selection frame in the graphical user interface by taking the preset position as a circle center and the shortest connecting line between the first position and the second position as a radius.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, the selecting, according to the generated selection frame and the positions of the plurality of virtual units, a target virtual unit from the plurality of virtual units includes:
and when the touch medium and the graphical user interface are detected to change from a contact state to a non-contact state, selecting a target virtual unit from a plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, the selecting, according to the generated selection frame and the positions of the plurality of virtual units, a target virtual unit from the plurality of virtual units includes:
selecting a target virtual unit from the plurality of virtual units according to the degree of positional overlap between the current display positions of the virtual units in the graphical user interface and the currently generated selection frame.
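The "degree of positional overlap" can be read, for units drawn with an on-screen bounding rectangle, as the fraction of the unit's rectangle covered by the selection frame; the threshold value and all names below are assumptions for illustration, not the patent's definition:

```python
def overlap_degree(unit_rect, frame):
    """Fraction of the unit's display rectangle covered by the selection
    frame; rectangles are (left, top, width, height) in screen pixels."""
    ux, uy, uw, uh = unit_rect
    fx, fy, fw, fh = frame
    ix = max(0.0, min(ux + uw, fx + fw) - max(ux, fx))  # intersection width
    iy = max(0.0, min(uy + uh, fy + fh) - max(uy, fy))  # intersection height
    return (ix * iy) / (uw * uh)

def select_targets(units, frame, threshold=0.5):
    """Pick the units whose overlap with the frame meets the threshold."""
    return [name for name, rect in units.items()
            if overlap_degree(rect, frame) >= threshold]
```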
In an exemplary embodiment of the present disclosure, based on the foregoing solution, the graphical user interface further includes a sliding control area, where the sliding control area is determined according to the real-time positions in the graphical user interface to which the plurality of virtual units in the game scene are mapped;
the responding to a sliding operation of the touch medium on the graphical user interface includes: responding to a sliding operation of the touch medium in the sliding control area.
According to a second aspect of the present disclosure, there is provided a virtual unit selection method in a game, in which a game screen obtained by a virtual camera capturing a part or all of a game scene and a plurality of virtual units located in the game scene is displayed through a graphical user interface, the method comprising:
responding to the sliding operation of the touch medium on the graphical user interface, and adjusting the pose of the virtual camera to update the game picture;
in response to a target trigger operation acting on the graphical user interface, entering a frame selection state and displaying a frame selection mark at a preset position of the graphical user interface, wherein, in the frame selection state, the graphical user interface displays the game picture obtained by the virtual camera capturing the game scene and the virtual units in its current pose;
in the frame selection state, in response to the sliding operation of the touch medium on the graphical user interface, generating a selection frame in the graphical user interface based on the sliding operation, with the preset position of the frame selection mark as a starting point or a center;
and selecting a target virtual unit from the plurality of virtual units according to the selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the disclosure, based on the foregoing solution, a frame selection mark in an initial display state is displayed at a preset position of the graphical user interface;
the responding to the target triggering operation acted on the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface comprises the following steps:
and responding to a target triggering operation acted on the graphical user interface, entering a box selection state and modifying the state of the box selection mark from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing, the responding to the target trigger operation acting on the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface, includes:
in response to an initial trigger operation acting on the graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface and configuring the state of the frame selection mark as an initial display state;
and in response to a target trigger operation acting on the graphical user interface, entering the frame selection state and modifying the state of the frame selection mark from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing, the responding to the target trigger operation acting on the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface, includes:
and responding to target triggering operation acted on a preset control in the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface.
In an exemplary embodiment of the disclosure, based on the foregoing aspect, the generating a selection frame in the graphical user interface based on the sliding operation with the frame selection mark as a starting point or a center includes:
generating a selection frame in the graphical user interface based on the sliding operation, when starting from the preset position where the frame selection mark is located, by performing the following process:
determining the generation direction of the selection frame in the horizontal and/or vertical direction according to the movement direction, in the horizontal and/or vertical direction, to which the sliding operation is mapped;
determining the display size of the boundary of the selection frame in the horizontal and/or vertical direction according to the movement distance, in the horizontal and/or vertical direction, to which the sliding operation is mapped; and
generating the selection frame in the graphical user interface, with the preset position as a starting point, based on the generation direction and the display size;
or generating a selection frame in the graphical user interface based on the sliding operation, when centered on the preset position where the frame selection mark is located, by performing the following process:
determining the display size of the selection frame in the graphical user interface according to the sliding distance of the sliding operation; and
generating the selection frame in the graphical user interface, with the preset position as the center, based on the display size.
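The two generation modes above can be sketched as follows (the function name, the `mode` flag, and the square shape of the centered frame are assumptions for illustration, not the patent's implementation):

```python
import math

def frame_from_slide(preset_pos, slide_vec, mode="corner"):
    """'corner': the preset position is a fixed corner; the slide's horizontal
    and vertical components give the growth direction (sign) and edge sizes.
    'center': the slide distance sets the half-extent of a frame centered on
    the preset position. Returns (left, top, width, height)."""
    px, py = preset_pos
    dx, dy = slide_vec
    if mode == "corner":
        return (min(px, px + dx), min(py, py + dy), abs(dx), abs(dy))
    half = math.hypot(dx, dy)  # slide distance as half-extent
    return (px - half, py - half, 2 * half, 2 * half)
```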
In an exemplary embodiment of the present disclosure, based on the foregoing solution, selecting a target virtual unit from a plurality of virtual units according to the selection frame and the positions of the plurality of virtual units, includes:
when it is detected that the touch medium changes from a contact state to a non-contact state with respect to the graphical user interface, selecting a target virtual unit from the plurality of virtual units according to the currently generated selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, selecting a target virtual unit from a plurality of virtual units according to the selection frame and the positions of the plurality of virtual units, includes:
and selecting a target virtual unit from a plurality of virtual units according to the display position of the virtual unit in the graphical user interface and the generated position overlapping degree between the selection frames.
In an exemplary embodiment of the present disclosure, based on the foregoing solution, the graphical user interface further includes a sliding control area, where the sliding control area is determined according to the real-time positions in the graphical user interface to which the plurality of virtual units in the game scene are mapped;
the responding to a sliding operation of the touch medium on the graphical user interface includes: responding to a sliding operation of the touch medium in the sliding control area.
According to a third aspect of the present disclosure, there is provided a virtual unit selection apparatus in a game, which displays a game screen of the game through a graphical user interface, the game screen including a part or all of a game scene and a plurality of virtual units located in the game scene, the apparatus comprising:
a frame selection mark display module configured to, in response to a target trigger operation acting on the graphical user interface, display a frame selection mark located at a preset position of the graphical user interface and determine a first game location in the game scene to which the frame selection mark is mapped;
a frame selection mark moving module configured to control the frame selection mark to move in the game scene in response to a sliding operation of the touch medium on the graphical user interface;
a selection frame generation module configured to determine, in response to the end of the sliding operation, a second game location in the game scene to which the frame selection mark at the preset position is mapped, so as to generate a selection frame in the graphical user interface based on the first game location and the second game location; and
a virtual unit selection module configured to select a target virtual unit from the plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
According to a fourth aspect of the present disclosure, there is provided a virtual unit selection device in a game, which displays, through a graphical user interface, a game screen obtained by a virtual camera capturing a part or all of a game scene and a plurality of virtual units located in the game scene, the device comprising:
the game picture updating module is configured to respond to the sliding operation of the touch medium on the graphical user interface and adjust the pose of the virtual camera so as to update the game picture;
a frame selection mark display module configured to enter a frame selection state in response to a target trigger operation acting on the graphical user interface and display a frame selection mark at a preset position of the graphical user interface, wherein, in the frame selection state, the graphical user interface displays the game picture obtained by the virtual camera capturing the game scene and the virtual units in its current pose;
a selection frame generation module configured to, in the frame selection state, in response to a sliding operation of the touch medium on the graphical user interface, generate a selection frame in the graphical user interface based on the sliding operation, with the preset position of the frame selection mark as a starting point or a center; and
a virtual unit selection module configured to select a target virtual unit from the plurality of virtual units according to the selection frame and the positions of the plurality of virtual units.
According to a fifth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual unit selection method in a game as described in the first and/or second aspects of the above embodiments.
According to a sixth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the virtual unit selection method in a game as described in the first and/or second aspects of the above embodiments.
As can be seen from the above technical solutions, the method for selecting a virtual unit in a game, the device for selecting a virtual unit in a game, and the computer-readable storage medium and the electronic device for implementing the method for selecting a virtual unit in a game according to the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
In some embodiments of the present disclosure, first, in response to a target trigger operation acting on the graphical user interface, a frame selection mark located at a preset position of the graphical user interface is displayed, and the first game location in the game scene to which it is mapped is determined. Then, in response to a sliding operation of a touch medium on the graphical user interface, the frame selection mark is controlled to move in the game scene; in response to the end of the sliding operation, the second game location in the game scene to which the frame selection mark at the preset position is mapped is determined, so that a selection frame can be generated in the graphical user interface based on the first game location and the second game location, and virtual units in the game can then be selected according to the generated selection frame. Compared with the related art, on the one hand, the selection frame is generated directly through the movement of the frame selection mark in the game scene, which improves the selection efficiency of virtual units; on the other hand, the selection frame is generated directly by a sliding operation in the graphical user interface, which, compared with forming a selection frame by a two-finger touch, reduces the blocking of the player's line of sight during virtual unit selection and improves both the player's experience and the accuracy of game operations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some examples of the disclosure, and that other drawings may be derived from them without creative effort.
FIG. 1A shows a graphical user interface diagram of prior art virtual unit selection in an exemplary embodiment of the present disclosure;
FIG. 1B illustrates another graphical user interface diagram of prior art virtual unit selection in an exemplary embodiment of the present disclosure;
FIG. 1C illustrates yet another graphical user interface diagram of prior art virtual unit selection in an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a method of virtual unit selection in a game in an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a graphical user interface diagram in an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a flow diagram of a method of configuring a status of a box selection marker to a target display status in an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a method for controlling the frame selection mark to move in a game scene in an exemplary embodiment of the present disclosure;
FIG. 6 is a flow chart of a method for generating a selection box based on a sliding operation of a touch medium in an exemplary embodiment of the disclosure;
FIG. 7 is a flow chart of another method for generating a selection box based on a sliding operation of a touch medium in an exemplary embodiment of the disclosure;
FIG. 8 illustrates a flow diagram of another method of virtual unit selection in a game in an exemplary embodiment of the present disclosure;
FIG. 9 is a flowchart illustrating a method for generating a selection box using a preset position as a starting point in an exemplary embodiment of the present disclosure;
FIG. 10 illustrates a graphical user interface of a selection box generated starting from a preset location in an exemplary embodiment of the present disclosure;
FIG. 11 is a flow chart illustrating a method of generating a selection box centered on a preset location in an exemplary embodiment of the present disclosure;
FIG. 12 illustrates yet another graphical user interface diagram in an exemplary embodiment of the present disclosure;
FIG. 13 illustrates yet another graphical user interface diagram in an exemplary embodiment of the present disclosure;
FIG. 14 is a schematic diagram showing the configuration of a virtual unit selection apparatus in a game according to an exemplary embodiment of the present disclosure;
FIG. 15 shows a schematic structural diagram of another virtual unit selection apparatus in a game in an exemplary embodiment of the present disclosure;
FIG. 16 illustrates a schematic diagram of a computer storage medium in an exemplary embodiment of the disclosure;
FIG. 17 shows a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
Selecting some required virtual units from a plurality of virtual units in a game, so as to control the selected virtual units to perform further game operations, is one of the conventional operations of an RTS (Real-Time Strategy) game.
On a PC (Personal Computer), the selection of virtual units is easy: a game player can select a single virtual unit, or select virtual units in batches, by clicking and dragging a selection frame with a mouse. For touch terminals with smaller display screens, however, such as mobile phones, tablet computers, wearable electronic devices, etc., this approach is not suitable.
In the related art, virtual units can be selected at the mobile phone end by forming a selection frame through a two-point touch. As shown in fig. 1A, a selection frame may be formed in the area between the two touch points according to the operation gesture of the player, so as to select virtual units.
However, this method requires the player to tap and move the fingers quickly; the operation steps are complicated, the efficiency is low, and the false-touch rate is high. Further, whether the clustered virtual units at the center of the screen or those at the sides of the screen are selected, as shown in fig. 1B and fig. 1C, the fingers block the central line of sight of the player engaged in the confrontation during the operation, which affects the player's experience and the accuracy of game operations.
In an embodiment of the present disclosure, a virtual unit selection method in a game is provided, which overcomes at least some of the drawbacks existing in the related art described above.
Fig. 2 is a flow chart illustrating a method for selecting virtual units in a game according to an exemplary embodiment of the present disclosure, where the method for selecting virtual units in a game provided in the present embodiment displays a game screen of the game through a graphical user interface, where the game screen includes a part or all of a game scene and a plurality of virtual units located in the game scene. Referring to fig. 2, the method includes:
Step S210, in response to a target triggering operation acting on the graphical user interface, displaying a frame selection mark positioned at a preset position of the graphical user interface and determining a first game place in the game scene to which the frame selection mark is mapped;
step S220, responding to the sliding operation of the touch medium on the graphical user interface, and controlling the frame selection mark to move in the game scene;
step S230, in response to the end of the sliding operation, determining a second game place in the game scene to which the box selection mark at the preset position is mapped, so as to generate a selection frame in the graphical user interface based on the first game place and the second game place;
and step S240, selecting a target virtual unit from the plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
In the technical solution provided in the embodiment shown in fig. 2, first, in response to a target trigger operation acting on a graphical user interface, a box selection mark located at a preset position of the graphical user interface is displayed and it is determined that the box selection mark is mapped to a first game location in a game scene, then, in response to a sliding operation of a touch medium on the graphical user interface, the box selection mark is controlled to move in the game scene, in response to the end of the sliding operation, it is determined that the box selection mark at the preset position is mapped to a second game location in the game scene, so that a selection frame can be generated in the graphical user interface based on the first game location and the second game location, and then virtual units in the game are selected according to the generated selection frame. Compared with the related art, on one hand, the method and the device can directly generate the selection frame through the movement of the frame selection mark in the game scene so as to select the virtual units, thereby improving the selection efficiency of the virtual units; on the other hand, the selection frame can be directly generated by sliding operation in the graphical user interface, and compared with a mode of forming the selection frame by double-point touch, the method reduces shielding of the visual line of a player in the virtual unit selection process, and improves the experience of the player and the accuracy of game operation.
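The four steps above can be sketched in code. The following is a minimal illustrative sketch, not the claimed implementation; all names (`Rect`, `make_selection_box`, `select_target_units`, the `units` structure) are hypothetical, and the game scene is reduced to a 2D plane for clarity.

```python
# Hypothetical sketch of steps S210-S240: record a first game place at the
# trigger, a second game place when sliding ends, build a selection frame
# from the two, and pick the virtual units whose positions fall inside it.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rect:
    left: float
    bottom: float
    right: float
    top: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.bottom <= y <= self.top


def make_selection_box(first_place, second_place) -> Rect:
    """Rectangular selection frame whose diagonal joins the game places
    recorded at the start (S210) and end (S230) of the sliding operation."""
    (x1, y1), (x2, y2) = first_place, second_place
    return Rect(min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))


def select_target_units(box: Rect, units):
    """Step S240: select the units located inside the generated frame."""
    return [u for u in units if box.contains(*u["pos"])]


units = [{"id": "a", "pos": (2, 2)}, {"id": "b", "pos": (9, 9)}]
box = make_selection_box((0, 0), (5, 5))  # first and second game place
print([u["id"] for u in select_target_units(box, units)])  # → ['a']
```

The sketch illustrates the key point of the scheme: the frame is derived from two recorded game places rather than from two simultaneous touch points.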
The following describes in detail the specific implementation of each step in the embodiment shown in fig. 2:
in step S210, in response to a target trigger operation acting on the graphical user interface, a box selection marker located at a preset position of the graphical user interface is displayed, and the first game place in the game scene to which the box selection marker is mapped is determined.
The game in this embodiment is a touch screen type mobile terminal game, and may be a mobile phone game, a tablet game, a wearable electronic device game, or the like. Touch screen type mobile terminal games can receive and identify various game operations of a game user at a terminal interface.
In an exemplary embodiment, the target triggering operation may include long pressing, clicking, sliding along various movement tracks in the graphical user interface by the touch medium, and so on. Any operation that triggers the frame selection mark to change state so that the game of the current client enters the virtual unit frame selection state may serve as the target triggering operation, and all such operations are within the protection scope of the present disclosure.
In another exemplary embodiment, step S210 may include: and responding to target triggering operation acted on a preset control in the graphical user interface, and displaying a frame selection mark positioned at a preset position of the graphical user interface.
In other words, the target trigger operation may include an operation of triggering, through a preset control, the box selection marker to change state so that the game of the current client enters a virtual unit box selection state. For example, a preset control may be preconfigured in the graphical user interface, and when the frame selection mark located at a preset position in the graphical user interface is in the initial display state, the triggering operation of the preset control causes entry into the virtual unit frame selection state. The triggering operation of the preset control may include an operation of clicking or double-clicking the preset control, an operation of long-pressing the preset control, or an operation of triggering the preset control along various movement tracks.
In an exemplary embodiment, a box selection marker in an initial display state is displayed at a preset position of the graphical user interface. That is, as soon as the game starts, a box selection marker in an initial display state is directly displayed in the game interface. Based on this, a specific embodiment of step S210 may be to modify the state of the box selection marker from the initial display state to the target display state in response to a target trigger operation acting on the graphical user interface. For example, in response to a target trigger operation for a preset control in the graphical user interface, the state of the box selection marker is modified from the initial display state to the target display state.
Wherein the initial display state includes a display state when the box selection marker is not activated, such as when the box selection marker is displayed in gray. The target display state may include a display state when the box selection marker is activated, such as when the box selection marker is displayed in red, or other colors having a high degree of distinction from the game background in the game screen and different from the initial display state. Of course, the initial display state and the target display state may be distinguished by other display modes besides color, which is not particularly limited in the present exemplary embodiment.
In an exemplary embodiment, the shape of the box selection marker may be a point, which may be understood as a centroid, or any other marker that can identify the position in the graphical user interface from which the selection frame is generated, as its starting point or center.
It should be noted that the frame selection mark may be set at any position of the graphical user interface used for displaying the game screen, such as the center of the graphical user interface. Once the placement position of the frame selection mark is set, it will not change with any operation of the user in the process of selecting a virtual unit; that is, its display position in the graphical user interface is fixed.
Taking as an example the case where the target triggering operation includes an operation of triggering, through a preset control, the frame selection mark to change state so that the game of the current client enters the virtual unit frame selection state: when the user enters the game, a frame selection mark is displayed at a preset position of the game interface, and the frame selection mark is at this time in the unactivated initial display state, such as the frame selection mark 31 in fig. 3. Meanwhile, a preset control capable of activating the frame selection mark is preconfigured in the graphical user interface, such as the "box selection" control in fig. 3. When the user performs a target trigger operation on the "box selection" control, the frame selection mark 31 in fig. 3 will be activated, and the frame selection mark changes from the initial display state to the target display state. At this time, the preset control may also change from a first display state to a second display state, for example from gray to red, so as to remind the user that sliding in the sliding control area can now select virtual units.
In another alternative embodiment, the box selection marker is not displayed directly in the graphical user interface at the beginning of the game. Based on this, another embodiment of step S210 will be described below with reference to fig. 4.
Fig. 4 shows a flow diagram of a method of configuring a status of a box selection marker to a target display status in an exemplary embodiment of the present disclosure. Referring to fig. 4, the method may include steps S410 to S420. Wherein:
in step S410, in response to an initial trigger operation applied to a graphical user interface, a box selection marker is displayed at a preset position of the graphical user interface, and a state of the box selection marker is configured as an initial display state.
In an exemplary embodiment, the initial triggering operation may include a triggering operation for the graphical user interface or for a preset control in the graphical user interface when the box selection marker is not displayed in the graphical user interface. The operation may include a single click operation, a double click operation, a long press operation, etc., or may be an operation of triggering a preset control by various movement tracks, which is not particularly limited in the present exemplary embodiment.
For example, when the frame selection mark is not displayed in the graphical user interface, in response to an initial triggering operation of the touch medium on the graphical user interface, a preconfigured frame selection mark may be displayed at a preset position of the graphical user interface, and a display state of the frame selection mark may be configured to be an initial display state, such as the gray display state described above.
After the box selection mark is displayed in the graphical user interface, in response to a target trigger operation acting on the graphical user interface, the state of the box selection mark is modified from the initial display state to a target display state in step S420.
For example, when a user needs to make a virtual unit selection, the graphical user interface or a preset control in the graphical user interface may be triggered to display a preconfigured frame selection marker at a preset position of the graphical user interface, where the frame selection marker is not activated although displayed, i.e. is in an initial display state; when the user triggers the graphical user interface again or a preset control in the graphical user interface, the box selection marker is activated, i.e. changed from the initial display state to the target display state. When the frame selection mark is displayed as a target display state, the selection frame can be generated according to the sliding operation of the touch medium at the moment so as to select the virtual unit.
It should be noted that, the initial triggering operation and the target triggering operation may be the same operation type, for example, both are operations of clicking a preset control, but the initial triggering operation is a clicking operation performed on the graphical user interface or the preset control in the graphical user interface when the frame selection mark is not displayed in the graphical user interface, and the target triggering operation is a clicking operation performed on the graphical user interface or the preset control in the graphical user interface when a frame selection mark in an initial display state is displayed in the graphical user interface. The operation types of the initial triggering operation and the target triggering operation may be different, for example, the initial triggering operation is an operation of clicking a certain preset area in the graphical user interface or a preset control in the graphical user interface, the target triggering operation is an operation of performing long press in a certain preset area in the graphical user interface or an operation of performing long press on the preset control, and the exemplary embodiment is not limited in particular.
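The marker states and transitions described above can be summarized as a small state machine. The sketch below is illustrative only; the state names (`HIDDEN`, `INITIAL`, `TARGET`) are assumptions and do not appear in the disclosure.

```python
# Illustrative state machine for the box selection marker: hidden until an
# initial trigger, displayed-but-inactive until a target trigger, then active.
from enum import Enum, auto


class MarkerState(Enum):
    HIDDEN = auto()   # marker not yet displayed in the graphical user interface
    INITIAL = auto()  # displayed but not activated (e.g. shown in gray)
    TARGET = auto()   # activated (e.g. shown in red); sliding now box-selects


def on_trigger(state: MarkerState) -> MarkerState:
    """Initial trigger shows the marker (step S410); target trigger
    activates it (step S420); further triggers leave it active."""
    if state is MarkerState.HIDDEN:
        return MarkerState.INITIAL
    if state is MarkerState.INITIAL:
        return MarkerState.TARGET
    return state


s = MarkerState.HIDDEN
s = on_trigger(s)  # marker displayed in its initial display state
s = on_trigger(s)  # marker activated; frame selection is now enabled
print(s.name)      # → TARGET
```

Note that, as stated above, the two triggers may be the same operation type (e.g. two clicks) or different types (e.g. a click and then a long press); the state machine only cares about the order.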
Next, with continued reference to fig. 2, in step S220, the box selection marker is controlled to move in the game scene in response to a sliding operation of the touch medium on the graphical user interface.
In an exemplary embodiment, a sliding control area is included in the graphical user interface. The sliding operation of the touch medium on the graphical user interface may include a sliding operation of the touch medium on a sliding control area in the graphical user interface. Based on this, in a specific embodiment of step S220, the movement of the box selection marker in the game scene may be controlled in response to the sliding operation of the touch medium in the sliding control area.
In an exemplary embodiment, the slide control area includes an area determined from the positions at which the plurality of virtual units in the game scene are mapped into the graphical user interface. The sliding control area may be understood as any area of the graphical user interface other than those mapped positions, that is, an area that does not coincide with the displayed position of any virtual unit in the graphical user interface.
In this way, the user can perform the sliding operation at an edge position far away from the display positions of the virtual units in the graphical user interface to select virtual units, thereby reducing the occlusion of the game picture during the virtual unit selection operation, allowing the user to select virtual units within a larger field of view, and improving the accuracy of the user's virtual unit selection.
In another embodiment, the sliding control area may be an area outside a preset range of the box selection mark, for example: a circular area or a rectangular area is formed centering on the position of the frame selection mark, and an area other than the circular area or the rectangular area can be used as a slide control area. In this way, the shielding of the game screen by the operation process of virtual unit selection can be reduced.
In yet another exemplary embodiment, the slide control area may be an area of the graphical user interface that causes little occlusion of the user's line of sight, such as the upper left corner, upper right corner, lower left corner, or lower right corner. For example, a certain area in the graphical user interface may be configured in advance as a slide control area capable of triggering generation of a selection frame, such as the slide control area 32 in the graphical user interface shown in fig. 3. The sliding control area may be a part or all of the area of the graphical user interface, which is not particularly limited in the present exemplary embodiment. When the sliding control area is the whole area of the graphical user interface, the user will usually choose to slide in an edge area farther from the virtual unit cluster, so as to avoid the sliding operation blocking his or her own view of the game screen during the selection frame generation process.
Next, step S220 will be further described with reference to fig. 5. Fig. 5 shows a flow diagram of a method of controlling movement of a box selection marker in a game scene in an exemplary embodiment of the present disclosure. Referring to fig. 5, the method may include steps S510 to S520. Wherein,
in step S510, in response to the sliding operation of the touch medium on the graphical user interface, the pose of the virtual camera is adjusted, so that the game screen displayed on the graphical user interface changes.
In the present disclosure, a game screen obtained by photographing a part or all of a game scene and a plurality of virtual units located in the game scene by a virtual camera is displayed through a graphical user interface.
For example, the sliding operation of the touch medium in the graphical user interface and the pose adjustment of the virtual camera may be associated in advance, and when the touch medium slides in the graphical user interface, the pose of the virtual camera capturing the game scene may be correspondingly adjusted based on the sliding operation of the touch medium, so as to change the capturing view angle of the virtual camera, thereby changing the game picture displayed in the graphical user interface.
Next, in step S520, the game place to which the box selection mark at the preset position is mapped in the game scene is changed based on the change of the game screen displayed in the graphical user interface, so as to control the box selection mark to move in the game scene.
For example, when a game screen displayed in the graphical user interface changes, a game place mapped in the game scene by the frame selection mark at a preset position of the graphical user interface also changes, so that the frame selection mark can be indirectly controlled to move in the game scene through the pose adjustment of the virtual camera. Through the above-described steps S510 to S520, the frame selection mark located at the preset position of the graphical user interface may be controlled to move in the game scene by changing the pose of the virtual camera based on the sliding operation of the touch medium.
It should be noted that, during the sliding of the touch medium to adjust the pose of the virtual camera, the frame selection mark always remains at the preset position of the graphical user interface, and its position does not change with the sliding of the touch medium. Since the change in the pose of the virtual camera changes the game screen displayed in the graphical user interface, although the frame selection mark is always at the preset position of the graphical user interface, the game place to which the frame selection mark is mapped in the game scene changes with the change of the game screen, thereby indirectly controlling the frame selection mark to move in the game scene.
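The mapping of the fixed on-screen marker to a moving game place can be illustrated with a deliberately simplified model. The sketch below assumes a top-down 2D camera where panning translates the view one-to-one; a real engine would use the full camera projection, and all names here are hypothetical.

```python
# Minimal sketch (assumed top-down 2D camera, unit scale): the marker stays
# at a fixed screen position, but the game place it maps to follows the
# camera as the camera pans during the sliding operation.
def screen_to_world(camera_pos, preset_screen_pos, screen_center):
    """Game place of a screen point = camera position plus the point's
    displacement from the screen center (simplified, no zoom/rotation)."""
    cx, cy = camera_pos
    sx, sy = preset_screen_pos
    ox, oy = screen_center
    return (cx + (sx - ox), cy + (sy - oy))


center = (480, 270)  # screen center of a 960x540 viewport, marker placed there
first = screen_to_world((100, 100), center, center)   # first game place
# sliding the touch medium pans the camera; the marker's game place follows
second = screen_to_world((160, 140), center, center)  # second game place
print(first, second)  # → (100, 100) (160, 140)
```

Under this model the marker's screen coordinates never change between the two calls; only the camera position does, which is exactly the indirect movement described above.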
In an exemplary embodiment, the specific embodiment of step S220 may include: and when the frame selection mark is in the target display state, responding to the sliding operation of the touch medium on the graphical user interface, and controlling the frame selection mark to move in the game scene.
In other words, when the box selection mark is in the initial display state, the pose of the virtual camera can be adjusted based on the sliding of the touch medium in the graphical user interface, so as to adjust the game screen displayed in the graphical user interface and thereby change the player's viewing angle on the game scene; however, the box selection mark is not controlled to move in the game scene, i.e. no selection box is generated. Only when the frame selection mark is in the target display state can a selection frame be generated, based on the sliding operation of the touch medium in the graphical user interface, to select virtual units while the pose of the virtual camera is adjusted. Thus, the virtual unit selection method in the present disclosure does not affect the normal adjustment process of the virtual camera in the game; it is equivalent to multiplexing the adjustment operation of the virtual camera in the game to realize the selection of virtual units.
With continued reference to fig. 2, in step S230, responsive to the end of the sliding operation, a second game place in the game scene to which the box selection marker at the preset position is mapped is determined, so as to generate a selection box in the graphical user interface based on the first game place and the second game place.
The end of the sliding operation may include a change of the touch medium and the graphical user interface from a contact state to a non-contact state, and may further include an operation of stopping the sliding of the touch medium, which is not particularly limited in this exemplary embodiment.
Exemplarily, the specific embodiment of step S230 may include: generating a selection frame based on a first position at which the first game place is mapped in the graphical user interface and a second position at which the second game place is mapped in the graphical user interface, wherein the second position coincides with the preset position.
As described above, since the display position of the box selection mark in the graphical user interface does not change during the sliding of the touch medium, after the sliding operation ends, the second position at which the second game place is mapped in the graphical user interface is exactly the preset position where the box selection mark is located; that is, the second position coincides with the preset position.
In one exemplary embodiment, generating a selection box based on a first location of the first game location mapping in the graphical user interface and a second location of the second game location mapping in the graphical user interface includes: a rectangular selection box is generated in the graphical user interface by taking the shortest connecting line between the first position and the second position as a diagonal line.
For example, since the second position and the preset position are coincident, a rectangular selection frame may be generated by taking the shortest line between the preset position and the first position as a diagonal line.
In another exemplary embodiment, generating a selection box based on a first location of the first game location mapping in the graphical user interface and a second location of the second game location mapping in the graphical user interface includes: and generating a circular selection frame in the graphical user interface by taking the preset position as a circle center and the shortest connecting line between the first position and the second position as a radius.
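The two frame shapes just described, a rectangle on the diagonal and a circle on the radius, can be sketched as follows. This is an illustrative sketch only; the coordinates are screen positions in the graphical user interface, and the function names and returned dictionaries are assumptions.

```python
# Sketch of the two selection-frame shapes: a rectangle whose diagonal is
# the shortest line between the first position and the preset (second)
# position, and a circle centered on the preset position with that line
# as its radius.
import math


def rect_box(first_pos, preset_pos):
    """Rectangle with the first/preset positions as opposite corners."""
    (x1, y1), (x2, y2) = first_pos, preset_pos
    return {"shape": "rect",
            "corners": ((min(x1, x2), min(y1, y2)),
                        (max(x1, x2), max(y1, y2)))}


def circle_box(first_pos, preset_pos):
    """Circle centered on the preset position; radius is the length of the
    shortest line between the first position and the preset position."""
    return {"shape": "circle",
            "center": preset_pos,
            "radius": math.dist(first_pos, preset_pos)}


print(rect_box((10, 20), (40, 60)))
print(circle_box((10, 20), (40, 60)))  # radius 50.0 (a 30-40-50 triangle)
```

Either shape is fully determined by the same two points, which is why a single sliding gesture suffices for both embodiments.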
In the present disclosure, the selection frame may be generated and displayed in real time during the sliding of the touch medium in the graphical user interface. Specifically, a selection frame may be generated and displayed in real time based on a shortest connection line between a first location of the first game location currently mapped in the graphical user interface and a preset location where the frame selection marker is located.
For example, when the touch medium slides in the graphical user interface, the pose of the virtual camera changes, and the change in the pose of the virtual camera changes the game screen displayed in the graphical user interface in real time. Therefore, a selection frame may be generated in the graphical user interface in real time, with the preset position of the frame selection marker as a starting point or center, based on the first position at which the first game place is currently mapped in the graphical user interface and the preset position of the frame selection marker.
Specifically, when the preset position is taken as a starting point, a selection frame can be generated in the graphical user interface by executing the steps in fig. 6. When centering on the preset position, a selection box may be generated in the graphical user interface by performing the steps in fig. 7.
Next, step S220 and step S230 will be further described with reference to fig. 6 and 7.
Fig. 6 is a flowchart illustrating a method for generating a selection box using a preset position as a starting point in an exemplary embodiment of the present disclosure. Referring to fig. 6, the method may include steps S610 to S630.
Wherein:
in step S610, the pose of the virtual camera in the game is adjusted based on the sliding operation.
For example, the pose of the virtual camera in the game scene may be adjusted according to the sliding direction and the sliding distance of the sliding operation, wherein the pose of the virtual camera includes its position and angle.
For example, according to a preset proportional relationship between the sliding distance of the sliding operation and the moving distance of the virtual camera, the moving distance and/or the rotating angle of the virtual camera may be determined based on the sliding distance of the sliding operation, and the moving direction and/or the rotating direction of the virtual camera may be determined based on the sliding direction of the sliding operation; for example, the moving direction of the virtual camera in the virtual scene may be identical or opposite to the sliding direction of the sliding operation. The virtual camera in the game may then be moved or rotated based on the sliding direction and the sliding distance of the sliding operation.
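A minimal sketch of this camera adjustment follows. The 1:2 ratio, the function name, and the sign convention are assumptions for illustration; the disclosure only requires some preset proportional relationship and a same-or-opposite direction mapping.

```python
# Sketch of step S610: move the virtual camera by a preset multiple of the
# slide distance, in the same direction as the slide or the opposite one.
def adjust_camera(camera_pos, slide_vec, ratio=2.0, same_direction=True):
    """Return the new camera position for a 2D pan (rotation omitted)."""
    dx, dy = slide_vec
    sign = 1.0 if same_direction else -1.0
    return (camera_pos[0] + sign * ratio * dx,
            camera_pos[1] + sign * ratio * dy)


print(adjust_camera((0, 0), (3, -1)))                        # → (6.0, -2.0)
print(adjust_camera((0, 0), (3, -1), same_direction=False))  # → (-6.0, 2.0)
```

The "opposite direction" variant corresponds to the common touch convention where dragging the content left pans the camera right.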
Next, in step S620, a first position of the first game location currently mapped in the graphical user interface is determined according to the pose change of the virtual camera.
Illustratively, when the pose of the virtual camera in the game changes, the game picture captured by the virtual camera changes, i.e. the game picture displayed in the graphical user interface changes. With this change, the first position at which the first game place is mapped in the graphical user interface also changes with the movement or rotation of the virtual camera; similarly, the display position of each virtual unit in the game in the graphical user interface changes with the movement or rotation of the virtual camera.
Thus, in step S620, the first position of the first game location currently mapped in the graphical user interface may be determined in real time according to the pose change of the virtual camera.
After the first position at which the first game place is currently mapped in the graphical user interface is determined, in step S630, a rectangular selection frame is generated in the graphical user interface, with the preset position as a starting point and the shortest line between the first position and the preset position as a diagonal line.
In an exemplary embodiment, the selection box generated in step S630 may be rectangular, such as square or rectangle.
As previously described, in the present disclosure, the display position of the box selection marker in the graphical user interface is fixed; that is, the box selection marker is fixedly displayed at the preset position of the graphical user interface. After the first position at which the first game place is currently mapped in the graphical user interface is determined, a rectangular selection frame can be generated in the graphical user interface in real time, with the shortest line between the first position and the preset position as a diagonal line.
Next, fig. 7 is a flowchart illustrating another method for generating a selection frame based on a sliding operation of a touch medium in an exemplary embodiment of the present disclosure. Referring to fig. 7, the method may include steps S710 to S740. Wherein:
In step S710, the pose of the virtual camera in the game is adjusted based on the sliding operation.
The specific embodiment of step S710 is identical to the specific embodiment of step S610 described above, and will not be described here again.
In step S720, a first position of the first game location currently mapped in the graphical user interface is determined according to the pose change of the virtual camera.
The specific embodiment of step S720 is identical to the specific embodiment of step S620 described above, and will not be described here again.
In step S730, the display size of the currently generated selection frame in the graphical user interface is determined according to the shortest connection line between the first position and the preset position.
In an exemplary embodiment, the selection box generated in step S730 may include a rectangle, a circle, or any other regular polygon, etc.
Taking the example that the generated selection frame is circular, the shortest connecting line between the first position and the preset position can be used as the radius of the generated circular selection frame, so as to determine the display size of the circular selection frame. Taking the example that the generated selection frame is rectangular, the length of the shortest connecting line between the first position and the preset position can be determined to be half of the length of the diagonal line of the generated rectangular selection frame, or the shortest connecting line between the first position and the preset position is taken as the side length of the generated rectangular selection frame, so that the display size of the currently generated rectangular selection frame in the graphical user interface can be determined in real time.
In step S740, a selection frame is generated in the graphical user interface based on the display size centering on the preset position.
For example, when the generated selection frame is circular, the preset position is the center of the circular selection frame, and when the generated selection frame is rectangular or any other regular polygon, the preset position is the intersection point of the diagonal lines. A selection frame may be generated centering on the preset position based on the display size determined in step S730.
In still another exemplary embodiment, the display size of the selection frame in the graphical user interface may be determined directly according to the sliding distance of the sliding operation.
For example, the increase in the size of each boundary line of the generated selection frame may be determined from the sliding distance mapped by the sliding operation in the horizontal and/or vertical direction, giving the current display size of each boundary line; the display size of the selection frame in the graphical user interface can then be determined from the display sizes of the boundary lines.
For example, a proportional relationship may be preset between the sliding distance mapped in the horizontal and/or vertical direction and the increase in the display size of the corresponding boundary of the pulled-out selection frame. Taking a rectangular selection frame as an example, suppose the ratio of the horizontal sliding distance to the current increase in the rectangle's horizontal boundary is preset to 1:3, and the ratio of the vertical sliding distance to the increase in the vertical boundary is likewise 1:3. If the sliding operation initially moves toward the upper right, then in this process of generating the selection frame, horizontally rightward and vertically upward are the positive directions; when the sliding operation maps to a positive direction, the increase computed from the preset ratio is positive, and otherwise it is negative. Thus, if the user's initial upward-right slide maps to a distance of 1 in the horizontal direction and 1.5 in the vertical direction, the display size of the selection frame's horizontal boundary is 3 and that of its vertical boundary is 4.5, giving a display size, i.e., a display area, of 13.5.
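The signed proportional mapping and the worked 1:3 example above can be sketched as follows (an illustrative sketch only; the function name is hypothetical, and the sign convention follows the text's "initial slide direction is positive" rule):

```python
def centered_rect_size(dx, dy, ratio=3.0):
    """Map a slide displacement (dx, dy), measured along the positive
    axes fixed by the initial slide direction, to the display size of a
    centered rectangular selection frame using the preset 1:3 ratio.
    A later reverse slide yields a negative increment that shrinks the
    corresponding boundary."""
    width = dx * ratio     # signed horizontal boundary display size
    height = dy * ratio    # signed vertical boundary display size
    return width, height, abs(width * height)

# worked example from the text: slide distances (1, 1.5) with ratio 1:3
w, h, area = centered_rect_size(1.0, 1.5)   # w = 3.0, h = 4.5, area = 13.5
```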
If the generated selection frame is a circle, the sliding distance mapped by the sliding operation in the horizontal or vertical direction may be used to determine the increase in the radius of the circle, thereby determining the display size of the circular selection frame in the graphical user interface. Similarly, the radius of the currently generated circular selection frame may be determined from the sliding distance of the current sliding operation according to a preset proportional relationship between the sliding distance and the increase in the circle's radius. Taking as an example the case where the increase in the radius is determined from the sliding distance mapped in the horizontal direction, the direction in which the sliding operation is initially mapped in the horizontal direction may be defined as positive; for example, if the sliding operation initially slides toward the upper right, horizontally rightward is the positive direction, and the increase in the radius is positive when the sliding operation maps to that direction and negative otherwise.
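For the circular case, the radius update can be sketched in the same style (illustrative only; the clamp to zero for a negative result is an added assumption, not stated in the text):

```python
def circle_radius(prev_radius, dx, ratio=3.0):
    """Update the circular frame's radius from the horizontal slide
    increment dx, where the initial slide direction is positive.
    Sliding back past the start gives a negative increment; a negative
    resulting radius is clamped to zero (an illustrative assumption)."""
    return max(0.0, prev_radius + dx * ratio)
```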
After determining the display size of the currently generated selection frame, a selection frame may be generated in the graphical user interface based on the currently determined display size with the preset position as a center.
Continuing with the above example in which the horizontal boundary of the selection frame has a display size of 3 and the vertical boundary a display size of 4.5, a rectangular selection frame with a horizontal display size of 3 and a vertical display size of 4.5 is currently generated in the graphical user interface, taking as the intersection point of its diagonals the first position at which the first game location is currently mapped in the graphical user interface.
In the specific embodiments for generating selection frames provided in fig. 6 and fig. 7, the selection frame is generated in real time from the user's sliding operation on the graphical user interface, so that selection frames of different display sizes can be generated on demand to select virtual units, thereby improving the efficiency and accuracy of selecting virtual units. Moreover, because the pose of the virtual camera also changes while the selection frame is being generated, the display positions of the corresponding virtual units in the graphical user interface change accordingly, so the user can select any virtual unit to be selected in the virtual scene in real time through the pose change of the virtual camera.
In step S240, a target virtual unit is selected from the plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
The virtual units may be weapon models in the game, building models in the game, or other scene models in the game. The virtual units may also include other virtual objects in the game that can be selected for game operations, which are not particularly limited in this exemplary embodiment.
Illustratively, the specific embodiment of step S240 may include: when it is detected that the touch medium changes from a contact state to a non-contact state with respect to the graphical user interface, selecting a target virtual unit from the plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
For example, when the touch medium leaves the graphical user interface, a target virtual unit may be selected from the plurality of virtual units based on the currently generated selection frame and the locations of the plurality of virtual units.
Illustratively, selecting a target virtual unit from a plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units, including: and selecting a target virtual unit from a plurality of virtual units according to the position overlapping degree between the current display position of the virtual unit in the graphical user interface and the currently generated selection frame.
The display position of a virtual unit may include the display area it occupies in the graphical user interface. The position overlapping degree can be understood as the overlap rate between the area of the display region occupied by the display position of the virtual unit in the graphical user interface and the area of the display region occupied by the display position of the selection frame in the graphical user interface.
Take A to represent the area of the display region occupied by the display position of a certain virtual unit in the graphical user interface, and B to represent the area of the display region occupied by the display position of the selection frame in the graphical user interface. The position overlapping degree between them can then be expressed as the following equation (1) or equation (2):

Overlap = (A∩B) / A (1)

Overlap = (A∩B) / B (2)

where A∩B represents the intersection of A and B, i.e., the area of the region where A overlaps B.
For example, when determining the target virtual unit according to the degree of positional overlap between the display position of the virtual unit and the selection frame, a virtual unit whose degree of positional overlap is greater than or equal to a preset threshold may be determined as the target virtual unit. The preset threshold can be customized according to actual situations or requirements, and can be any value greater than 0 and less than or equal to 1. Taking the above equation (1) as an example, when the preset threshold equals 1, only virtual units whose display positions in the graphical user interface fall entirely within the selection frame are selected.
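The threshold test based on equation (1) can be sketched for axis-aligned rectangular display areas (an illustrative sketch; the function names and the rectangle representation are assumptions, not part of the claims):

```python
def overlap_ratio(unit_rect, frame_rect):
    """Equation (1): intersection area divided by the unit's own area.
    Rectangles are (x_min, y_min, x_max, y_max) in screen coordinates."""
    ax0, ay0, ax1, ay1 = unit_rect
    bx0, by0, bx1, by1 = frame_rect
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))   # intersection width
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))   # intersection height
    unit_area = (ax1 - ax0) * (ay1 - ay0)
    return (iw * ih) / unit_area if unit_area else 0.0

def select_targets(units, frame_rect, threshold=1.0):
    """Return the units whose position overlapping degree meets the
    preset threshold; threshold 1.0 selects only units whose display
    positions are fully contained in the selection frame."""
    return [name for name, rect in units.items()
            if overlap_ratio(rect, frame_rect) >= threshold]
```

With a frame of (0, 0, 3, 3), a unit occupying (1, 1, 2, 2) is fully contained (ratio 1.0), while a unit occupying (0, 0, 5, 5) overlaps only partially (ratio 9/25).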
In an exemplary application scenario, after selecting a target virtual unit from a plurality of virtual units, a game player may control the selected target virtual unit to perform a corresponding game action according to his own demand. Such as controlling the movement of the target virtual units, attacking, grouping the target virtual units, etc., which is not particularly limited in the present exemplary embodiment.
In an exemplary embodiment, after selecting the target virtual unit from the plurality of virtual units, the method further includes: and canceling the display of the selection frame, and displaying the selected target virtual unit according to a preset selection mark.
For example, after a target virtual unit is selected in the game, indicating that the selection is completed, the display of the selection box may be canceled. Meanwhile, the finally selected target virtual units can be specially displayed according to the preset selection mark, so that the user is clearly prompted as to which virtual units are currently selected. The preset selection mark may be customized according to requirements; for example, a yellow circle may be added to the selected virtual unit, which is not limited in this exemplary embodiment.
In another exemplary embodiment, after the target virtual unit is selected from the plurality of virtual units, the display of the box selection mark may be canceled, or the box selection mark may be modified from the target display state back to the initial display state. In this way, when the user slides in the graphical user interface, the pose of the virtual camera is adjusted based on the sliding operation, so that the game scene captured in the virtual camera's field of view is updated and the user's observation range and angle are adjusted; however, no selection frame is generated at this time, and no selection operation is performed on the virtual units. That is, under normal conditions, the function of adjusting the field of view and the content observed by the game player based on the sliding operation of the touch medium in the graphical user interface is not affected.
Through the above steps S210 to S240, when the frame selection mark is in the target display state, the pose of the virtual camera in the game scene can be adjusted at the same time as a selection frame is generated in the graphical user interface based on the sliding operation of the touch medium. Because the pose change of the virtual camera also changes the display positions of the virtual units in the graphical user interface, the user can conveniently select any virtual unit to be selected in the game scene by combining the pose change of the virtual camera with the generated selection frame, which ensures the accuracy of frame-selecting virtual units and improves the efficiency of frame selection.
In the embodiment shown in fig. 2, when the box selection mark is in the target display state, the gesture of the virtual camera may be adjusted based on the sliding operation of the touch medium in the graphical user interface, and the selection box may be generated in the graphical user interface to select the virtual unit.
In another virtual unit selection method provided by the present disclosure, when the frame selection mark is in the target display state, based on the sliding operation of the touch medium in the graphical user interface, the pose of the virtual camera may not be adjusted, but a selection frame is directly generated in the graphical user interface with a preset position as a starting point or a center, so as to select the virtual unit.
Fig. 8 is a flowchart illustrating another virtual unit selection method according to an exemplary embodiment of the present disclosure, in which a part or all of a game scene captured by a virtual camera and a game screen obtained by a plurality of virtual units located in the game scene are displayed through a graphical user interface. Referring to fig. 8, the method may include steps S810 to S840. Wherein:
step S810, responding to the sliding operation of the touch medium on the graphical user interface, and adjusting the pose of the virtual camera so as to update the game picture;
step S820, responding to the target triggering operation acted on the graphical user interface, entering a frame selection state and displaying a frame selection mark at a preset position of the graphical user interface, wherein the graphical user interface displays a game picture obtained by shooting the game scene and the virtual unit by a virtual camera in the current pose in the frame selection state;
step S830, in a frame selection state, responding to a sliding operation of a touch medium on a graphical user interface, and generating a selection frame in the graphical user interface based on the sliding operation by taking a preset position of the frame selection mark as a starting point or a center;
Step S840, selecting a target virtual unit from the plurality of virtual units according to the selection frame and the positions of the plurality of virtual units.
In the method provided in the embodiment shown in fig. 8, in the box selection state, the selection box may be directly generated by a sliding operation of the user in the graphical user interface with the box selection mark as a starting point or a center, so that the virtual unit is selected based on the generated selection box. Compared with the related art, on one hand, the method and the device can directly generate the selection frame through the frame selection mark so as to select the virtual units, thereby improving the selection efficiency of the virtual units; on the other hand, the selection frame can be directly generated by sliding operation in the graphical user interface, and compared with a mode of forming the selection frame by double-point touch, the method reduces shielding of the visual line of a player in the virtual unit selection process, and improves the experience of the player and the accuracy of game operation.
Next, each step in the embodiment shown in fig. 8 will be described in detail.
In step S810, in response to the sliding operation of the touch medium on the gui, the pose of the virtual camera is adjusted to update the game screen.
For example, before entering the box selection state described in step S820, based on the sliding operation of the touch medium in the graphical user interface, the pose of the virtual camera capturing the game scene may be adjusted to update the game screen currently displayed in the graphical user interface.
When the pose of the virtual camera changes before the frame selection state is entered, the game screen displayed in the graphical user interface changes, and the display positions of the virtual units in the graphical user interface change accordingly. It should be noted, however, that the game locations of the virtual units within the game scene do not change; only the display positions to which those game locations are currently mapped in the graphical user interface change with the pose of the virtual camera.
In step S820, in response to a target trigger operation acting on the gui, a frame selection state is entered and a frame selection mark is displayed at a preset position of the gui, where in the frame selection state, the gui displays a game screen obtained by capturing the game scene and the virtual unit with a virtual camera in a current pose.
In an exemplary embodiment, the box selection state may be understood as the game state in which the box selection mark of step S210 above is in the target display state. When the box selection mark is in the target display state, the game is currently in the box selection state; in this state, the pose of the virtual camera does not change with the sliding operation of the touch medium, that is, the graphical user interface displays a game picture obtained by the virtual camera shooting the game scene and the virtual units in its current pose.
The target trigger operation in step S820 is the same as the target trigger operation in step S210, and will not be described here.
In an exemplary embodiment, a box selection mark in an initial display state is displayed at a preset position of the graphical user interface; the responding to the target triggering operation acted on the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface comprises the following steps: and responding to a target triggering operation acted on the graphical user interface, entering a box selection state and modifying the state of the box selection mark from the initial display state to a target display state.
In another exemplary embodiment, the entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface in response to a target trigger operation acting on the graphical user interface includes: responding to an initial triggering operation acted on a graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface, and configuring the state of the frame selection mark as an initial display state; and responding to a target triggering operation acted on the graphical user interface, entering a box selection state and modifying the state of the box selection mark from the initial display state to a target display state. The initial triggering operation and the target triggering operation may refer to the description in fig. 4, and are not described herein.
When a preset control capable of triggering to enter the frame selection state is preconfigured in the graphical user interface, the specific implementation manner of step S820 may include: and responding to target triggering operation acted on a preset control in the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface.
In the frame selection state, the graphical user interface displays a game picture obtained by shooting the game scene and the virtual unit by the virtual camera while keeping the current pose. That is, in the box selection state, the game screen displayed in the graphical user interface is not updated along with the sliding operation of the touch medium in the graphical user interface.
Next, in step S830, in response to the sliding operation of the touch medium on the gui, a selection frame is generated in the gui based on the sliding operation with the preset position of the frame selection mark as the starting point or center.
In an exemplary embodiment, the sliding operation of the touch medium on the gui in step S830 may also include a sliding operation of the touch medium in a preset sliding control area on the gui, which is the same as the sliding operation in step S220. The explanation of the sliding control area may refer to the explanation in step S220, which is not described herein.
For example, when starting from a preset position where the box selection mark is located, a selection box may be generated in the graphical user interface based on a sliding operation by performing the steps in fig. 9. When centering on a preset position where the box selection mark is located, a selection box may be generated in the graphical user interface based on a sliding operation by performing the steps in fig. 11.
The following describes the specific embodiment of step S830 in detail with reference to fig. 9 to 11.
Fig. 9 is a flowchart illustrating a method for generating a selection box using a preset position as a starting point in an exemplary embodiment of the present disclosure. Referring to fig. 9, the method may include steps S910 to S930.
In step S910, the generation direction of the selection frame in the horizontal and/or vertical direction is determined according to the movement direction mapped by the sliding operation in the horizontal and/or vertical direction. For example, the movement direction mapped by the sliding operation in the horizontal and/or vertical direction coincides with the generation direction of the selection frame in the corresponding direction; that is, the movement direction of the sliding operation in the horizontal direction is the generation direction of the selection frame in the horizontal direction, and its movement direction in the vertical direction is the generation direction in the vertical direction. The direction mapped in the horizontal and/or vertical direction at the start of the sliding operation is taken as the positive generation direction of the selection frame in the corresponding direction.
If the sliding operation initially moves toward the upper right, the selection frame may be pulled out from the preset position toward the upper right; that is, horizontally rightward is the positive generation direction of the selection frame in the horizontal direction, and vertically upward is its positive direction in the vertical direction. If the upward-right slide then reverses toward the lower left, the generation direction of the selection frame becomes negative in both the horizontal and vertical directions.
Next, in step S920, the display size of the boundary of the selection frame in the horizontal direction and/or the vertical direction is correspondingly determined according to the moving distance of the sliding operation map in the horizontal direction and/or the vertical direction.
For example, a proportional relationship between a sliding distance of the sliding operation in the horizontal direction and/or the vertical direction and a display size of the pulled-out selection frame, the display boundary of which is currently increased, may be preset. And a direction mapped in the horizontal direction and/or the vertical direction at the beginning of the sliding operation is defined as a positive direction generated by the selection frame, and a direction opposite to the positive direction is defined as a negative direction generated by the selection frame.
If the sliding operation initially slides toward the upper right, then during this generation of the selection frame, horizontally rightward is the positive generation direction in the horizontal direction and vertically upward the positive direction in the vertical direction. When the current generation direction in the horizontal and/or vertical direction is positive, the increase in the display size of the selection frame in that direction, calculated from the preset ratio, is positive; otherwise it is negative. The display size of the selection frame in the horizontal and/or vertical direction is then determined from the current increase in the display size of its boundary.
In step S930, the selection frame is generated in the graphical user interface based on the generation direction and the display size, starting from the preset position.
For example, the selection box in step S930 may include a rectangle. Specifically, the selection frame may be generated in real time based on the determined current generation direction in the horizontal direction and/or the vertical direction and the current increased display size of the display boundary of the selection frame in the corresponding direction, with the preset position as a starting point.
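Steps S910 to S930 can be sketched for a rectangular frame pulled out from the preset starting point (an illustrative sketch; the function name, the 1:3 ratio default, and the normalized rectangle representation are assumptions):

```python
def rect_from_start(start, dx, dy, ratio=3.0):
    """Pull a rectangular selection frame out of the preset position
    (steps S910 to S930): the slide displacement (dx, dy), measured
    along the positive axes fixed by the initial slide direction,
    gives both the generation direction (its sign) and the boundary
    display size (its magnitude times the preset ratio)."""
    sx, sy = start
    ex = sx + dx * ratio   # signed horizontal extent from the start point
    ey = sy + dy * ratio   # signed vertical extent from the start point
    # normalize to (x_min, y_min, x_max, y_max) for drawing/hit-testing
    return (min(sx, ex), min(sy, ey), max(sx, ex), max(sy, ey))
```

A slide of (1, 1.5) from start point (10, 10) yields the frame (10, 10, 13, 14.5); a leftward component instead extends the frame to the left of the start point.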
Fig. 10 illustrates an exemplary selection frame generated according to the above steps S910 to S930, as shown at 101 in fig. 10. In fig. 10, because the virtual camera does not move during the generation of the selection frame, the first position at which the first game location mapped by the frame selection mark in the game scene is displayed does not change during this process, and remains at the preset position corresponding to the frame selection mark.
Next, fig. 11 is a flowchart illustrating a method for generating a selection box centering on a preset position in an exemplary embodiment of the present disclosure. Referring to fig. 11, the method may include steps S1110 to S1120. Wherein:
in step S1110, a display size of the selection frame in the graphical user interface is determined according to the sliding distance of the sliding operation.
For example, the increase in the size of each boundary line of the generated selection frame may be determined from the sliding distance mapped by the sliding operation in the horizontal and/or vertical direction, giving the current display size of each boundary line; the display size of the selection frame in the graphical user interface can then be determined from the display sizes of the boundary lines.
For example, a proportional relationship may be preset between the sliding distance mapped in the horizontal and/or vertical direction and the increase in the display size of the corresponding boundary of the pulled-out selection frame. Taking a rectangular selection frame as an example, suppose the ratio of the horizontal sliding distance to the current increase in the rectangle's horizontal boundary is preset to 1:3, and the ratio of the vertical sliding distance to the increase in the vertical boundary is likewise 1:3. If the sliding operation initially moves toward the upper right, then in this process of generating the selection frame, horizontally rightward and vertically upward are the positive directions; when the sliding operation maps to a positive direction, the increase computed from the preset ratio is positive, and otherwise it is negative. Thus, if the user's initial upward-right slide maps to a distance of 1 in the horizontal direction and 1.5 in the vertical direction, the display size of the selection frame's horizontal boundary is 3 and that of its vertical boundary is 4.5, giving a display size, i.e., a display area, of 13.5.
If the generated selection frame is a circle, the sliding distance mapped by the sliding operation in the horizontal or vertical direction may be used to determine the increase in the radius of the circle, thereby determining the display size of the circular selection frame in the graphical user interface. Similarly, the radius of the currently generated circular selection frame may be determined from the sliding distance of the current sliding operation according to a preset proportional relationship between the sliding distance and the increase in the circle's radius. Taking as an example the case where the increase in the radius is determined from the sliding distance mapped in the horizontal direction, the direction in which the sliding operation is initially mapped in the horizontal direction may be defined as positive; for example, if the sliding operation initially slides toward the upper right, horizontally rightward is the positive direction, and the increase in the radius is positive when the sliding operation maps to that direction and negative otherwise.
In step S1120, a selection frame is generated in the graphical user interface based on the display size centering on the preset position.
Illustratively, the selection frame in step S1120 may include a rectangle, a circle, or any other regular polygon, etc. Continuing with the example in which the horizontal boundary of the selection frame has a display size of 3 and the vertical boundary a display size of 4.5, a rectangular selection frame with a horizontal display size of 3 and a vertical display size of 4.5 is generated in the graphical user interface, with the preset position as the intersection point of its diagonals.

Next, in step S840, a target virtual unit is selected from the plurality of virtual units according to the selection frame and the positions of the plurality of virtual units.
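Placing a frame of a determined display size so that the preset position is the intersection of its diagonals (step S1120) can be sketched as follows (illustrative only; the function name and rectangle representation are assumptions):

```python
def centered_rect(center, width, height):
    """Place a rectangle of the determined display size so that the
    preset position is the intersection point of its diagonals."""
    cx, cy = center
    return (cx - width / 2, cy - height / 2,
            cx + width / 2, cy + height / 2)

# the 3 x 4.5 frame from the text, centered on a preset position (10, 10)
frame = centered_rect((10.0, 10.0), 3.0, 4.5)
```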
Illustratively, selecting a target virtual unit from a plurality of virtual units according to the selection frame and the positions of the plurality of virtual units, including: when the touch medium and the graphical user interface are detected to change from a contact state to a non-contact state, selecting a target virtual unit from a plurality of virtual units according to the currently generated selection frame and the positions of the plurality of virtual units.
Illustratively, selecting a target virtual unit from the plurality of virtual units according to the selection frame and the positions of the plurality of virtual units includes: selecting a target virtual unit from the plurality of virtual units according to the degree of overlap between the display positions of the virtual units in the graphical user interface and the generated selection frame.
For the specific embodiment of step S840, reference may be made to step S240 described above, and details are not repeated here.
In an exemplary application scenario, after selecting a target virtual unit from a plurality of virtual units, a game player may control the selected target virtual unit to perform a corresponding game action according to his own demand. Such as controlling the movement of the target virtual units, attacking, grouping the target virtual units, etc., which is not particularly limited in the present exemplary embodiment.
In an exemplary embodiment, after selecting the target virtual unit from the plurality of virtual units, the method further includes: and canceling the display of the selection frame, and displaying the selected target virtual unit according to a preset selection mark.
In another exemplary embodiment, after a target virtual unit is selected from the plurality of virtual units, the display of the frame selection mark may be canceled, or the frame selection mark may be modified from the target display state back to the initial display state, while the current game exits the frame selection state.
Thus, in the non-frame-selection state, the sliding operation of the touch medium controls the movement of the virtual camera so as to update the game picture obtained by shooting the game scene within the virtual camera's field of view. In the frame selection state, the sliding operation of the touch medium controls the generation of the selection frame but cannot move the virtual camera; after the frame selection state is exited, the sliding operation's control over the pose adjustment of the virtual camera is restored, so that the game picture obtained by shooting the game scene within the field of view can again be updated.
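The state-dependent dispatch of the slide gesture described above can be sketched as follows (illustrative only; the dictionary fields and function name are assumptions, and real camera pose adjustment would involve more than a 2D offset):

```python
def handle_slide(game, delta):
    """Dispatch a slide gesture according to the frame selection state:
    outside it, the virtual camera pose is adjusted to update the game
    picture; inside it, the selection frame grows instead and the
    camera stays fixed."""
    if game["in_frame_selection"]:
        game["frame_size"] = (game["frame_size"][0] + delta[0],
                              game["frame_size"][1] + delta[1])
    else:
        game["camera_pos"] = (game["camera_pos"][0] + delta[0],
                              game["camera_pos"][1] + delta[1])
    return game
```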
In an exemplary embodiment, in order to select a virtual unit quickly, before the frame selection mark is activated for selection (that is, while in the non-frame selection state), the pose of the virtual camera may be adjusted based on the sliding operation so as to move the display position of the virtual unit in the graphical user interface approximately to the vicinity of the frame selection mark. The frame selection mark is then activated, that is, the frame selection state is entered, and a selection frame is generated based on the sliding operation to select the virtual unit. In this way, because the display position of the virtual unit to be selected has already been moved near the frame selection mark before the selection frame is generated, the virtual unit to be selected can be selected quickly when the selection frame is generated with the frame selection mark as the starting point or center.
For example, taking the touch medium as the user's fingers: when the user wants to select a virtual unit in the game, the user may slide a finger (the right-hand finger) in the sliding control area to adjust the pose of the virtual camera, thereby aligning the frame selection mark, which is located at the preset position of the graphical user interface and in the initial display state, with the starting point or center point of the selection frame to be generated. After alignment, the user's other finger (the left-hand finger) may tap the preset control; at this point the frame selection mark in the initial display state is activated, i.e., configured to the target display state, and the current game enters the frame selection state.
When the frame selection mark is configured to the target display state, a selection frame can be generated for virtual unit selection based on a sliding operation. The user's finger (the left-hand finger) may then leave the preset control, and the right-hand finger may slide on the graphical user interface to generate the selection frame. When the virtual unit to be selected is framed into the generated selection frame, the user's right-hand finger may stop sliding; if the right-hand finger then leaves the graphical user interface, any virtual unit whose position overlap degree is greater than a preset threshold may be selected, where the overlap degree is measured between the display area occupied by the virtual unit's current display position in the graphical user interface and the currently generated selection frame.
In an exemplary embodiment, when a rectangular selection frame is generated with the preset position as a starting point, if the vertex diagonally opposite the starting point lies to the right of the starting point, the selection frame is established from left to right; otherwise, it is established from right to left.
When the selection frame is established from left to right, as shown in fig. 12, virtual units largely enclosed by the selection frame may be selected; for example, virtual units having an overlap ratio with the selection frame of more than 85% may be selected. When the selection frame is established from right to left, as shown in fig. 13, all virtual units that are in contact with the selection frame may be selected; for example, virtual units having an overlap ratio with the selection frame of greater than 10% may be selected. That is, the preset overlap-ratio threshold used when the selection frame is established from left to right may be larger than the threshold used when it is established from right to left; of course, the relationship may also be reversed, or no distinction may be made between the two directions, with only a single preset overlap-ratio threshold being set.
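This direction-dependent threshold can be sketched as follows (the function name and the 85%/10% defaults are illustrative, taken from the example values above; the patent allows the relationship to be reversed or collapsed to a single threshold):

```python
def selection_threshold(start_x, end_x, strict=0.85, loose=0.10):
    """Pick the overlap threshold from the drag direction.

    A left-to-right frame (end to the right of the start) selects only
    mostly-enclosed units; a right-to-left frame also selects units that
    merely touch it.
    """
    return strict if end_x >= start_x else loose
```

The returned value would then be passed as the threshold of an overlap-based selection step.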
In an exemplary embodiment, after the virtual units are selected, the successfully selected virtual units may be specially marked to remind the user which virtual units are currently selected, and the generated selection frame may be canceled. At the same time, the frame selection mark is reconfigured to the initial display state, and the preset control is restored from the second display state to the first display state, so as to remind the user that the frame selection state has been exited.
It should be noted that the in-game virtual unit selection method provided in this exemplary embodiment may be performed at the server side, at the client side, or through interaction between the client and the server. For example, the server may calculate the current display size of the selection frame and the display position of the target game location in the graphical user interface based on the sliding-operation data sent by the client, and then send the results to the client, which generates the selection frame in its graphical user interface based on that data; this is not limited in this exemplary embodiment.
The virtual unit selection method provided by the exemplary embodiment of the disclosure can improve the efficiency of virtual unit selection, simplify the operation steps during virtual unit selection, and improve the usability of virtual unit selection operation. Further, the method provided by the exemplary embodiment of the present disclosure may enable a user to control the size of the generated selection frame by one hand to select the virtual unit, so as to reduce the shielding of the operation in the process of selecting the virtual unit on the game screen.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments may be implemented as a computer program executed by a CPU. When the computer program is executed by the CPU, the functions defined by the above-described method provided by the present invention are performed. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-described figures are merely illustrative of the processes involved in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Fig. 14 shows a schematic structural diagram of a virtual unit selection device in a game in an exemplary embodiment of the present disclosure, which displays a game screen of the game through a graphical user interface, the game screen including a part or all of a game scene and a plurality of virtual units located in the game scene. For example, referring to fig. 14, the apparatus 1400 may include a box selection marker display module 1410, a box selection marker movement module 1420, a selection box generation module 1430, and a virtual unit selection module 1440. Wherein:
a box tab display module 1410 configured to display a box tab located at a preset position of the graphical user interface and determine that the box tab maps to a first position game location in the game scene in response to a target trigger operation acting on the graphical user interface;
A box selection mark moving module 1420 configured to control the box selection mark to move in the game scene in response to a sliding operation of the touch medium on the graphical user interface;
a selection frame generation module 1430 configured to determine, in response to the end of the sliding operation, a second game place in the game scene to which the frame selection mark at the preset position is mapped, so as to generate a selection frame in the graphical user interface based on the first game place and the second game place;
the virtual unit selection module 1440 is configured to select a target virtual unit from a plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, a game screen obtained by photographing a part or all of a game scene and a plurality of virtual units located in the game scene by a virtual camera is displayed through a graphical user interface; the box selection marker movement module 1420 is further specifically configured to: adjust the pose of the virtual camera in response to the sliding operation of the touch medium on the graphical user interface, so as to change the game picture displayed in the graphical user interface; and change, based on the change of the game picture displayed in the graphical user interface, the game place to which the frame selection mark at the preset position is mapped in the game scene, so as to control the frame selection mark to move in the game scene.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, a frame selection mark in an initial display state is displayed at a preset position of the graphical user interface; the box selection marker display module 1410 is further specifically configured to: and in response to a target triggering operation acting on the graphical user interface, modifying the state of the box selection marker from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the box selection marker display module 1410 may be further specifically configured to: responding to an initial triggering operation acted on a graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface, and configuring the state of the frame selection mark as an initial display state; and in response to a target triggering operation acting on the graphical user interface, modifying the state of the box selection marker from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the box selection marker movement module 1420 is further specifically configured to: and when the frame selection mark is in the target display state, responding to the sliding operation of the touch medium on the graphical user interface, and controlling the frame selection mark to move in the game scene.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the box selection marker display module 1410 is further specifically configured to: and responding to target triggering operation acted on a preset control in the graphical user interface, and displaying a frame selection mark positioned at a preset position of the graphical user interface.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the selection box generating module 1430 is further specifically configured to:
generating the selection frame based on a first position to which the first game place is mapped in the graphical user interface and a second position to which the second game place is mapped in the graphical user interface, wherein the second position coincides with the preset position.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the generating a selection box based on the first location of the first game place map in the graphical user interface and the second location of the second game place map in the graphical user interface includes:
a rectangular selection frame is generated in the graphical user interface by taking the straight line segment between the first position and the second position as a diagonal.
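A minimal sketch of building such a rectangle from its two diagonal endpoints in screen coordinates (the function name and tuple representation are illustrative assumptions):

```python
def rect_from_diagonal(p1, p2):
    """Axis-aligned rectangle (x_min, y_min, x_max, y_max) whose diagonal
    is the segment between the two mapped screen positions p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```

Taking the min/max per axis makes the result independent of which corner the drag started from.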
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the generating a selection box based on the first location of the first game place map in the graphical user interface and the second location of the second game place map in the graphical user interface includes:
generating a circular selection frame in the graphical user interface with the preset position as the center and the length of the straight line segment between the first position and the second position as the radius.
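The circular variant can be sketched in the same spirit, with a fixed center and a drag-defined radius (function names are illustrative; the position-based hit test is one simple interpretation of overlap with a circle):

```python
import math

def circle_from_drag(center, end_point):
    """Circular selection frame: fixed center (the preset position) and a
    radius equal to the straight-line distance dragged by the touch medium."""
    radius = math.dist(center, end_point)
    return center, radius

def unit_in_circle(unit_pos, center, radius):
    """True when the unit's display position falls inside the selection circle."""
    return math.dist(unit_pos, center) <= radius
```

Because the center is pinned to the preset position, only the drag distance matters, not its direction.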
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the virtual unit selection module 1440 is further specifically configured to:
and when the touch medium and the graphical user interface are detected to change from a contact state to a non-contact state, selecting a target virtual unit from a plurality of virtual units according to the generated selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the virtual unit selection module 1440 is further specifically configured to: and selecting a target virtual unit from a plurality of virtual units according to the position overlapping degree between the current display position of the virtual unit in the graphical user interface and the currently generated selection frame.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the graphical user interface includes a sliding control area, wherein the sliding control area is an area determined according to the positions in the graphical user interface to which the real-time positions of the plurality of virtual units in the game scene are mapped;
The responding to the sliding operation of the touch medium on the graphical user interface comprises the following steps: and responding to the sliding operation of the touch medium in the sliding control area.
Fig. 15 illustrates another in-game virtual unit selection apparatus in an exemplary embodiment of the present disclosure. The apparatus displays, through a graphical user interface, a game picture obtained by a virtual camera shooting a part or all of a game scene and a plurality of virtual units located in the game scene. The apparatus 1500 includes: a game screen update module 1510, a box selection flag display module 1520, a selection box generation module 1530, and a virtual unit selection module 1540. Wherein:
a game screen update module 1510 configured to adjust the pose of the virtual camera in response to a sliding operation of the touch medium on the graphical user interface, to update the game screen;
a frame selection mark display module 1520 configured to enter a frame selection state in response to a target trigger operation acting on the graphical user interface and display a frame selection mark at a preset position of the graphical user interface, wherein in the frame selection state, the graphical user interface displays a game picture obtained by photographing the game scene and the virtual unit with a virtual camera maintaining a current pose;
The selection frame generating module 1530 is configured to respond to a sliding operation of the touch medium on the graphical user interface in a frame selection state, and generate a selection frame in the graphical user interface based on the sliding operation by taking a preset position where the frame selection mark is located as a starting point or a center;
the virtual unit selection module 1540 is configured to select a target virtual unit from a plurality of virtual units according to the selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, a frame selection mark in an initial display state is displayed at a preset position of the graphical user interface; based on this, the box selection marker display module 1520 may be further specifically configured to: and responding to a target triggering operation acted on the graphical user interface, entering a box selection state and modifying the state of the box selection mark from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the box selection marker display module 1520 may be further specifically configured to: responding to an initial triggering operation acted on a graphical user interface, displaying a frame selection mark at a preset position of the graphical user interface, and configuring the state of the frame selection mark as an initial display state; and responding to a target triggering operation acted on the graphical user interface, entering a box selection state and modifying the state of the box selection mark from the initial display state to a target display state.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the box selection marker display module 1520 may be further configured to:
and responding to target triggering operation acted on a preset control in the graphical user interface, entering a box selection state and displaying a box selection mark at a preset position of the graphical user interface.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the selection box generating module 1530 is specifically configured to:
generating a selection box in the graphical user interface based on the sliding operation by performing the following process when starting from a preset position where the box selection mark is located:
correspondingly determining the generation direction of the selection frame in the horizontal direction and/or the vertical direction according to the movement direction of the sliding operation map in the horizontal direction and/or the vertical direction;
correspondingly determining the display size of the boundary of the selection frame in the horizontal direction and/or the vertical direction according to the moving distance of the sliding operation map in the horizontal direction and/or the vertical direction;
generating the selection frame in a graphical user interface based on the generation direction and the display size by taking the preset position as a starting point;
Generating a selection box in the graphical user interface based on the sliding operation by performing the following process while centering on a preset position where the box selection mark is located:
determining the display size of the selection frame in the graphical user interface according to the sliding distance of the sliding operation;
and generating a selection frame in a graphical user interface based on the display size by taking the preset position as a center.
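The two generation modes described above (start-point mode and center mode) can be sketched as follows; the function names, the screen-coordinate rectangle representation, and the square shape of the centered frame are illustrative assumptions rather than details fixed by the patent:

```python
def frame_from_start(start, dx, dy):
    """Start-point mode: the drag's horizontal/vertical directions set the
    frame's generation direction, and its distances set the boundary sizes."""
    return (min(start[0], start[0] + dx), min(start[1], start[1] + dy),
            max(start[0], start[0] + dx), max(start[1], start[1] + dy))

def frame_from_center(center, drag_distance):
    """Center mode: the frame is centered on the preset position and its
    half-extent grows with the sliding distance."""
    r = abs(drag_distance)
    return (center[0] - r, center[1] - r, center[0] + r, center[1] + r)
```

In start-point mode a leftward or upward drag simply flips which side of the mark the frame grows toward; in center mode only the sliding distance determines the display size.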
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the virtual unit selection module 1540 is specifically configured to:
when the touch medium and the graphical user interface are detected to change from a contact state to a non-contact state, selecting a target virtual unit from a plurality of virtual units according to the currently generated selection frame and the positions of the plurality of virtual units.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the virtual unit selection module 1540 is specifically configured to:
and selecting a target virtual unit from a plurality of virtual units according to the display position of the virtual unit in the graphical user interface and the generated position overlapping degree between the selection frames.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the graphical user interface includes a sliding control area, wherein the sliding control area is an area determined according to the positions in the graphical user interface to which the real-time positions of the plurality of virtual units in the game scene are mapped;
the responding to the sliding operation of the touch medium on the graphical user interface comprises the following steps: and responding to the sliding operation of the touch medium in the sliding control area.
The specific details of each module in the above-mentioned virtual unit selection device in the game have been described in detail in the corresponding virtual unit selection method in the game, and therefore will not be described here again.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer storage medium capable of implementing the above method is also provided. On which a program product is stored which enables the implementation of the method described above in the present specification. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
Referring to fig. 16, a program product 1600 for implementing the above-described method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" system.
An electronic device 1700 according to such an embodiment of the present disclosure is described below with reference to fig. 17. The electronic device 1700 shown in fig. 17 is merely an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 17, the electronic device 1700 is in the form of a general purpose computing device. The components of electronic device 1700 may include, but are not limited to: the at least one processing unit 1710, the at least one storage unit 1720, a bus 1730 connecting different system components (including the storage unit 1720 and the processing unit 1710), and a display unit 1740.
Wherein the storage unit stores program code that is executable by the processing unit 1710, such that the processing unit 1710 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the "exemplary method" of the present specification. For example, the processing unit 1710 may perform the various steps as shown in fig. 2.
The storage unit 1720 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 17201 and/or a cache memory unit 17202, and may further include a read only memory unit (ROM) 17203.
The storage unit 1720 may also include a program/utility 17204 having a set (at least one) of program modules 17205, such program modules 17205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
Bus 1730 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1700 may also communicate with one or more external devices 1800 (e.g., keyboard, pointing device, Bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1700, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1750. Also, the electronic device 1700 can communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through a network adapter 1760. As shown, the network adapter 1760 communicates with other modules of the electronic device 1700 via the bus 1730. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (24)

1. A method for selecting a virtual unit in a game, characterized in that a game picture of the game is displayed through a graphical user interface, the game picture comprising a part or all of a game scene and a plurality of virtual units located in the game scene, and the method comprises:
in response to a target triggering operation acting on the graphical user interface, displaying a box selection marker at a preset position of the graphical user interface and determining a first game location in the game scene to which the box selection marker is mapped, wherein the display position of the box selection marker in the graphical user interface is fixed;
in response to a sliding operation of a touch medium on the graphical user interface, controlling the box selection marker to move in the game scene, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area that does not overlap the display position of any virtual unit in the graphical user interface;
in response to an end of the sliding operation, determining a second game location in the game scene to which the box selection marker at the preset position is mapped, so as to generate a selection box in the graphical user interface based on the first game location and the second game location; and
selecting a target virtual unit from the plurality of virtual units according to the generated selection box and the positions of the plurality of virtual units.
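Purely as an illustration of the flow of claim 1 (not part of the claim language): because the marker's screen position is fixed while the camera pans, the two anchor points are scene locations sampled at trigger time and at slide end. The names `BoxSelectState` and `screen_to_world` below are hypothetical, and `screen_to_world` stands in for whatever screen-to-scene projection a game engine provides.

```python
class BoxSelectState:
    def __init__(self, screen_to_world):
        # screen_to_world maps a screen position to a game-scene location
        # under the current virtual-camera pose.
        self.screen_to_world = screen_to_world
        self.first_location = None

    def on_trigger(self, marker_screen_pos):
        # Step 1: record the game location currently under the fixed marker.
        self.first_location = self.screen_to_world(marker_screen_pos)

    def on_slide_end(self, marker_screen_pos):
        # Step 3: the marker's screen position is unchanged, but the camera
        # has panned, so the same screen point now maps to a new location.
        second_location = self.screen_to_world(marker_screen_pos)
        return self.first_location, second_location
```

The two returned locations would then feed the selection-box generation described in claims 7 to 9.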
2. The method for selecting a virtual unit in a game according to claim 1, wherein the game picture displayed through the graphical user interface is obtained by a virtual camera photographing the part or all of the game scene and the plurality of virtual units located in the game scene; and
the controlling the box selection marker to move in the game scene in response to the sliding operation of the touch medium on the graphical user interface comprises:
in response to the sliding operation of the touch medium on the graphical user interface, adjusting a pose of the virtual camera so as to change the game picture displayed in the graphical user interface; and
changing, based on the change of the game picture displayed in the graphical user interface, the game location in the game scene to which the box selection marker at the preset position is mapped, so as to control the box selection marker to move in the game scene.
3. The method for selecting a virtual unit in a game according to claim 1, wherein a box selection marker in an initial display state is displayed at the preset position of the graphical user interface; and
the displaying, in response to the target triggering operation acting on the graphical user interface, a box selection marker at the preset position of the graphical user interface comprises:
in response to the target triggering operation acting on the graphical user interface, modifying the state of the box selection marker from the initial display state to a target display state.
4. The method according to claim 1, wherein the displaying, in response to a target triggering operation acting on the graphical user interface, a box selection marker at a preset position of the graphical user interface comprises:
in response to an initial triggering operation acting on the graphical user interface, displaying a box selection marker at the preset position of the graphical user interface and configuring the state of the box selection marker as an initial display state; and
in response to the target triggering operation acting on the graphical user interface, modifying the state of the box selection marker from the initial display state to a target display state.
5. The method according to claim 3 or 4, wherein the controlling the box selection marker to move in the game scene in response to the sliding operation of the touch medium on the graphical user interface comprises:
when the box selection marker is in the target display state, controlling, in response to the sliding operation of the touch medium on the graphical user interface, the box selection marker to move in the game scene.
6. The method according to claim 1, wherein the displaying, in response to a target triggering operation acting on the graphical user interface, a box selection marker at a preset position of the graphical user interface comprises:
in response to a target triggering operation acting on a preset control in the graphical user interface, displaying a box selection marker at the preset position of the graphical user interface.
7. The method according to claim 1, wherein the generating a selection box based on the first game location and the second game location comprises:
generating a selection box based on a first position to which the first game location is mapped in the graphical user interface and a second position to which the second game location is mapped in the graphical user interface, wherein the second position coincides with the preset position.
8. The method according to claim 7, wherein the generating a selection box based on the first position to which the first game location is mapped in the graphical user interface and the second position to which the second game location is mapped in the graphical user interface comprises:
generating, in the graphical user interface, a rectangular selection box whose diagonal is the shortest line segment connecting the first position and the second position.
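Purely as an illustration of this geometry (not part of the claim language; the function name is hypothetical): the shortest line between the two mapped positions is the straight segment joining them, so the axis-aligned rectangle reduces to a per-axis min/max.

```python
def rect_from_diagonal(p1, p2):
    # Axis-aligned rectangle whose diagonal is the segment p1-p2; ordering
    # each axis independently makes it work for any slide direction.
    left, right = min(p1[0], p2[0]), max(p1[0], p2[0])
    top, bottom = min(p1[1], p2[1]), max(p1[1], p2[1])
    return (left, top, right, bottom)
```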
9. The method according to claim 7, wherein the generating a selection box based on the first position to which the first game location is mapped in the graphical user interface and the second position to which the second game location is mapped in the graphical user interface comprises:
generating, in the graphical user interface, a circular selection box whose center is the preset position and whose radius is the length of the shortest line segment connecting the first position and the second position.
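A purely illustrative sketch of the circular variant (not part of the claim language; the function name is hypothetical): the fixed marker position is the centre, and the radius is the Euclidean distance to the first mapped position.

```python
import math

def circle_selection(preset_pos, first_pos):
    # Circle centred at the fixed marker position; the radius is the length
    # of the shortest line connecting the two mapped positions.
    radius = math.hypot(first_pos[0] - preset_pos[0],
                        first_pos[1] - preset_pos[1])
    return preset_pos, radius
```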
10. The method for selecting a virtual unit in a game according to claim 1, wherein the selecting a target virtual unit from the plurality of virtual units according to the generated selection box and the positions of the plurality of virtual units comprises:
when it is detected that the touch medium changes from a contact state to a non-contact state with respect to the graphical user interface, selecting a target virtual unit from the plurality of virtual units according to the generated selection box and the positions of the plurality of virtual units.
11. The method for selecting a virtual unit in a game according to claim 1, wherein the selecting a target virtual unit from the plurality of virtual units according to the generated selection box and the positions of the plurality of virtual units comprises:
selecting a target virtual unit from the plurality of virtual units according to the degree of positional overlap between the current display position of each virtual unit in the graphical user interface and the currently generated selection box.
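The claim does not fix a formula for the "degree of positional overlap". One plausible, purely illustrative reading (all names hypothetical) is the intersection area over the unit's own on-screen area, with a threshold deciding selection:

```python
def overlap_ratio(unit_rect, sel_rect):
    # Rectangles are (left, top, right, bottom) in screen coordinates.
    ix = max(0.0, min(unit_rect[2], sel_rect[2]) - max(unit_rect[0], sel_rect[0]))
    iy = max(0.0, min(unit_rect[3], sel_rect[3]) - max(unit_rect[1], sel_rect[1]))
    unit_area = (unit_rect[2] - unit_rect[0]) * (unit_rect[3] - unit_rect[1])
    return (ix * iy) / unit_area if unit_area else 0.0

def select_targets(unit_rects, sel_rect, threshold=0.5):
    # A unit is selected when enough of its display footprint lies inside
    # the currently generated selection box.
    return [name for name, rect in unit_rects.items()
            if overlap_ratio(rect, sel_rect) >= threshold]
```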
12. The method according to claim 1, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area determined according to the positions in the graphical user interface to which the real-time positions of the plurality of virtual units in the game scene are mapped; and
the responding to the sliding operation of the touch medium on the graphical user interface comprises: responding to the sliding operation of the touch medium in the sliding control area.
13. A method for selecting a virtual unit in a game, characterized in that a game picture obtained by a virtual camera photographing a part or all of a game scene and a plurality of virtual units located in the game scene is displayed through a graphical user interface, and the method comprises:
in response to a sliding operation of a touch medium on the graphical user interface, adjusting a pose of the virtual camera to update the game picture, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area that does not overlap the display position of any virtual unit in the graphical user interface;
in response to a target triggering operation acting on the graphical user interface, entering a box selection state and displaying a box selection marker at a preset position of the graphical user interface, wherein in the box selection state the graphical user interface displays a game picture obtained by the virtual camera photographing the game scene and the virtual units in its current pose, and the display position of the box selection marker in the graphical user interface is fixed;
in the box selection state, in response to a sliding operation of the touch medium on the graphical user interface, generating a selection box in the graphical user interface based on the sliding operation with the preset position of the box selection marker as a starting point or a center; and
selecting a target virtual unit from the plurality of virtual units according to the selection box and the positions of the plurality of virtual units.
14. The method for selecting a virtual unit in a game according to claim 13, wherein a box selection marker in an initial display state is displayed at the preset position of the graphical user interface; and
the entering a box selection state and displaying a box selection marker at a preset position of the graphical user interface in response to the target triggering operation acting on the graphical user interface comprises:
in response to the target triggering operation acting on the graphical user interface, entering the box selection state and modifying the state of the box selection marker from the initial display state to a target display state.
15. The method according to claim 13, wherein the entering a box selection state and displaying a box selection marker at a preset position of the graphical user interface in response to a target triggering operation acting on the graphical user interface comprises:
in response to an initial triggering operation acting on the graphical user interface, displaying a box selection marker at the preset position of the graphical user interface and configuring the state of the box selection marker as an initial display state; and
in response to the target triggering operation acting on the graphical user interface, entering the box selection state and modifying the state of the box selection marker from the initial display state to a target display state.
16. The method according to claim 13, wherein the entering a box selection state and displaying a box selection marker at a preset position of the graphical user interface in response to a target triggering operation acting on the graphical user interface comprises:
in response to a target triggering operation acting on a preset control in the graphical user interface, entering the box selection state and displaying a box selection marker at the preset position of the graphical user interface.
17. The method according to claim 13, wherein the generating a selection box in the graphical user interface based on the sliding operation with the preset position of the box selection marker as a starting point or a center comprises:
when the preset position of the box selection marker is taken as a starting point, generating the selection box in the graphical user interface based on the sliding operation by:
determining the generation direction of the selection box in the horizontal direction and/or the vertical direction according to the movement direction of the sliding operation as mapped in the horizontal direction and/or the vertical direction;
determining the display size of the boundary of the selection box in the horizontal direction and/or the vertical direction according to the movement distance of the sliding operation as mapped in the horizontal direction and/or the vertical direction; and
generating the selection box in the graphical user interface based on the generation direction and the display size with the preset position as a starting point; and
when the preset position of the box selection marker is taken as a center, generating the selection box in the graphical user interface based on the sliding operation by:
determining the display size of the selection box in the graphical user interface according to the sliding distance of the sliding operation; and
generating the selection box in the graphical user interface based on the display size with the preset position as a center.
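The two modes can be sketched as follows, purely for illustration (the patent does not prescribe an implementation, and all names are hypothetical). In start-point mode the sign of each slide component gives the growth direction and its magnitude the boundary size; in centre mode the slide distance alone sets the extent.

```python
import math

def box_from_start(preset_pos, dx, dy):
    # Start-point mode: the box grows from the fixed marker position toward
    # the slide direction, sized by the per-axis movement distances.
    x2, y2 = preset_pos[0] + dx, preset_pos[1] + dy
    return (min(preset_pos[0], x2), min(preset_pos[1], y2),
            max(preset_pos[0], x2), max(preset_pos[1], y2))

def box_from_center(preset_pos, dx, dy):
    # Centre mode: a box centred on the marker whose extent is set by the
    # total sliding distance.
    half = math.hypot(dx, dy)
    return (preset_pos[0] - half, preset_pos[1] - half,
            preset_pos[0] + half, preset_pos[1] + half)
```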
18. The method for selecting a virtual unit in a game according to claim 13, wherein the selecting a target virtual unit from the plurality of virtual units according to the selection box and the positions of the plurality of virtual units comprises:
when it is detected that the touch medium changes from a contact state to a non-contact state with respect to the graphical user interface, selecting a target virtual unit from the plurality of virtual units according to the currently generated selection box and the positions of the plurality of virtual units.
19. The method for selecting a virtual unit in a game according to claim 13, wherein the selecting a target virtual unit from the plurality of virtual units according to the selection box and the positions of the plurality of virtual units comprises:
selecting a target virtual unit from the plurality of virtual units according to the degree of positional overlap between the display position of each virtual unit in the graphical user interface and the currently generated selection box.
20. The method according to claim 13, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area determined according to the positions in the graphical user interface to which the real-time positions of the plurality of virtual units in the game scene are mapped; and
the responding to the sliding operation of the touch medium on the graphical user interface comprises: responding to the sliding operation of the touch medium in the sliding control area.
21. An apparatus for selecting a virtual unit in a game, wherein a game picture of the game is presented through a graphical user interface, the game picture comprising a part or all of a game scene and a plurality of virtual units located in the game scene, the apparatus comprising:
a box selection marker display module configured to, in response to a target triggering operation acting on the graphical user interface, display a box selection marker at a preset position of the graphical user interface and determine a first game location in the game scene to which the box selection marker is mapped, wherein the display position of the box selection marker in the graphical user interface is fixed;
a box selection marker moving module configured to control the box selection marker to move in the game scene in response to a sliding operation of a touch medium on the graphical user interface, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area that does not overlap the display position of any virtual unit in the graphical user interface;
a selection box generation module configured to determine, in response to an end of the sliding operation, a second game location in the game scene to which the box selection marker at the preset position is mapped, so as to generate a selection box in the graphical user interface based on the first game location and the second game location; and
a virtual unit selection module configured to select a target virtual unit from the plurality of virtual units according to the generated selection box and the positions of the plurality of virtual units.
22. An apparatus for selecting a virtual unit in a game, characterized in that a game picture obtained by a virtual camera photographing a part or all of a game scene and a plurality of virtual units located in the game scene is displayed through a graphical user interface, the apparatus comprising:
a game picture updating module configured to, in response to a sliding operation of a touch medium on the graphical user interface, adjust a pose of the virtual camera so as to update the game picture, wherein the graphical user interface comprises a sliding control area, the sliding control area being an area that does not overlap the display position of any virtual unit in the graphical user interface;
a box selection marker display module configured to enter a box selection state in response to a target triggering operation acting on the graphical user interface and display a box selection marker at a preset position of the graphical user interface, wherein in the box selection state the graphical user interface displays a game picture obtained by the virtual camera photographing the game scene and the virtual units in its current pose, and the display position of the box selection marker in the graphical user interface is fixed;
a selection box generation module configured to, in the box selection state and in response to a sliding operation of the touch medium on the graphical user interface, generate a selection box in the graphical user interface based on the sliding operation with the preset position of the box selection marker as a starting point or a center; and
a virtual unit selection module configured to select a target virtual unit from the plurality of virtual units according to the selection box and the positions of the plurality of virtual units.
23. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method for selecting a virtual unit in a game according to any one of claims 1 to 20.
24. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for selecting a virtual unit in a game according to any one of claims 1 to 20.
CN202110863742.7A 2021-07-29 2021-07-29 Virtual unit selection method and device in game, storage medium and electronic equipment Active CN113559501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110863742.7A CN113559501B (en) 2021-07-29 2021-07-29 Virtual unit selection method and device in game, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN113559501A CN113559501A (en) 2021-10-29
CN113559501B true CN113559501B (en) 2024-02-02

Family

ID=78168940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110863742.7A Active CN113559501B (en) 2021-07-29 2021-07-29 Virtual unit selection method and device in game, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113559501B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114442898B * 2022-01-29 2023-08-22 Netease Hangzhou Network Co Ltd Information processing method, device, electronic equipment and readable medium
CN115591234A * 2022-10-14 2023-01-13 Netease Hangzhou Network Co Ltd Display control method and device for virtual scene, storage medium and electronic equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2000245969A (en) * 1999-02-25 2000-09-12 Enix Corp Video game and recording medium storing program
JP2006255146A (en) * 2005-03-17 2006-09-28 Olympia:Kk Game machine, program for game machine and computer-readable recording medium recorded with program for game machine
CN101789992A (en) * 2009-12-29 2010-07-28 宇龙计算机通信科技(深圳)有限公司 Prompting method of customization information, system and mobile terminal
CN110215690A (en) * 2019-07-11 2019-09-10 网易(杭州)网络有限公司 View angle switch method, apparatus and electronic equipment in scene of game

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20110250967A1 (en) * 2010-04-13 2011-10-13 Kulas Charles J Gamepiece controller using a movable position-sensing display device


Non-Patent Citations (1)

Title
Detection and Extraction of Key Targets in Driving Scenes Based on Deep Learning; Zhang Xueqin et al.; Journal of East China University of Science and Technology (Natural Science Edition); Vol. 45, No. 6; pp. 980-988 *

Also Published As

Publication number Publication date
CN113559501A (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN107913520B (en) Information processing method, information processing device, electronic equipment and storage medium
US11231845B2 (en) Display adaptation method and apparatus for application, and storage medium
CN107977141B (en) Interaction control method and device, electronic equipment and storage medium
KR102649254B1 (en) Display control method, storage medium and electronic device
US20190179411A1 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US9437038B1 (en) Simulating three-dimensional views using depth relationships among planes of content
CN110471596B (en) Split screen switching method and device, storage medium and electronic equipment
US10950205B2 (en) Electronic device, augmented reality device for providing augmented reality service, and method of operating same
CN111530073B (en) Game map display control method, storage medium and electronic device
CN113559501B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN107832001B (en) Information processing method, information processing device, electronic equipment and storage medium
JP7005161B2 (en) Electronic devices and their control methods
CN112907760B (en) Three-dimensional object labeling method and device, tool, electronic equipment and storage medium
CN110559647B (en) Control method and device for sight display in virtual shooting game, medium and equipment
CN113546419B (en) Game map display method, game map display device, terminal and storage medium
CN108355352B (en) Virtual object control method and device, electronic device and storage medium
CN110286906B (en) User interface display method and device, storage medium and mobile terminal
US20150371438A1 (en) Computerized systems and methods for analyzing and determining properties of virtual environments
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
CN112965773A (en) Method, apparatus, device and storage medium for information display
CN113457144B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN113457117B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN107982916B (en) Information processing method, information processing device, electronic equipment and storage medium
JP7005160B2 (en) Electronic devices and their control methods
CN112473138B (en) Game display control method and device, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant