CN116764191A - Virtual scene area selection method and device, storage medium and computer equipment - Google Patents

Virtual scene area selection method and device, storage medium and computer equipment

Info

Publication number
CN116764191A
Authority
CN
China
Prior art keywords
virtual scene
area
preset sensing
sensing area
region
Prior art date
Legal status
Pending
Application number
CN202310761253.XA
Other languages
Chinese (zh)
Inventor
赵景辉
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310761253.XA
Publication of CN116764191A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/22 - Setup operations, e.g. calibration, key configuration or button assignment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1018 - Calibration; Key and button assignment
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/6045 - Methods for processing data by generating or executing the game program for mapping control signals received from the input arrangement into game commands

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a virtual scene area selection method and apparatus, a storage medium, and a computer device. The method includes: displaying a graphical user interface that includes at least part of a virtual scene picture, the virtual scene picture being made up of a plurality of virtual scene areas; in response to a first control operation on a first rocker in a game handle, determining a first position of the first rocker within a first preset sensing area; determining a first moving direction of the first position relative to the center point of the first preset sensing area, and determining the target sub-preset sensing area where the first position is located; acquiring a target area selection frame moving speed corresponding to the target sub-preset sensing area; and controlling the area selection frame to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene area from the plurality of virtual scene areas. Controlling the area selection frame to move at different moving speeds improves the control efficiency of the game handle.

Description

Virtual scene area selection method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computers, and in particular, to a method and apparatus for selecting a virtual scene area, a computer readable storage medium, and a computer device.
Background
In recent years, with the development and popularization of computer device technology, more and more game applications have appeared, such as real-time strategy games and role-playing games (RPGs).
In the prior art, taking a real-time strategy game as an example, when a user plays with a game handle and needs to perform a control operation on a particular target virtual area in the game screen, the user has to operate a rocker to control an area selection frame to move from the currently selected virtual area toward the target virtual area until the area selection frame selects the target virtual area.
During research and practice of the prior art, the inventor found that, in existing games, if the virtual area currently selected by the user is far from the target virtual area, the user must operate the rocker on the game handle for a long time to move the area selection frame to the target virtual area and select it. It is therefore difficult to match the control efficiency of devices such as a mouse and keyboard, and the control efficiency is low.
Disclosure of Invention
The embodiment of the application provides a method and a device for selecting a virtual scene area, which can improve the control efficiency of a game handle.
In order to solve the technical problems, the embodiment of the application provides the following technical scheme:
a method of selecting a virtual scene area, comprising:
displaying a graphical user interface comprising at least a portion of a virtual scene screen, the virtual scene screen being comprised of a plurality of virtual scene areas;
responding to a first control operation of a first rocker in a game handle, determining a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area;
determining a first moving direction of the first position relative to a center point of the first preset sensing area, and determining a target sub-preset sensing area where the first position is located from the first sub-preset sensing area and the second sub-preset sensing area;
acquiring a target area selection frame moving speed corresponding to the target sub-preset sensing area;
and controlling an area selection frame to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene area from the plurality of virtual scene areas.
A virtual scene area selection apparatus, comprising:
a display module for displaying a graphical user interface, the graphical user interface comprising at least a portion of a virtual scene, the virtual scene being made up of a plurality of virtual scene areas;
the first determining module is used for determining a first position of a first rocker in a game handle in response to a first control operation of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area;
the second determining module is used for determining a first moving direction of the first position relative to a center point of the first preset sensing area, and for determining a target sub-preset sensing area where the first position is located from the first sub-preset sensing area and the second sub-preset sensing area;
The first acquisition module is used for acquiring the moving speed of the target area selection frame corresponding to the target sub-preset sensing area;
and the first control module is used for controlling the region selection frame to move along the first moving direction from the current virtual scene region according to the moving speed of the target region selection frame so as to select the target virtual scene region from a plurality of virtual scene regions.
A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the above method of selecting a virtual scene area.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method for selecting a virtual scene area as described above when the program is executed.
In the embodiments of the present application, a graphical user interface is displayed, the graphical user interface including at least part of a virtual scene picture, the virtual scene picture being composed of a plurality of virtual scene areas; in response to a first control operation on a first rocker in a game handle, a first position of the first rocker in a first preset sensing area is determined, wherein the first preset sensing area at least includes a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; a first moving direction of the first position relative to the center point of the first preset sensing area is determined, and the target sub-preset sensing area where the first position is located is determined from the first sub-preset sensing area and the second sub-preset sensing area; a target area selection frame moving speed corresponding to the target sub-preset sensing area is acquired; and the area selection frame is controlled to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene area from the plurality of virtual scene areas. Because the first preset sensing area is partitioned, the area selection frame can be controlled to move at different moving speeds depending on which sub-preset sensing area the first position is located in when the first rocker is operated, which improves the control efficiency of the game handle.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1a is a system schematic diagram of a method for selecting a virtual scene area according to an embodiment of the present application.
Fig. 1b is a schematic flow chart of a first method for selecting a virtual scene area according to an embodiment of the present application.
Fig. 1c is a first schematic diagram of a graphical user interface according to an embodiment of the present application.
Fig. 1d is a first schematic diagram of a first preset sensing area according to an embodiment of the present application.
Fig. 1e is a second schematic diagram of a graphical user interface according to an embodiment of the present application.
Fig. 1f is a first schematic diagram of a second preset sensing area according to an embodiment of the present application.
Fig. 1g is a second schematic diagram of a first preset sensing area according to an embodiment of the present application.
Fig. 1h is a second schematic diagram of a second preset sensing area according to an embodiment of the present application.
Fig. 1i is a third schematic diagram of a graphical user interface according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a virtual scene area selecting device according to an embodiment of the present application.
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a virtual scene area selection method, a virtual scene area selection device, a storage medium and computer equipment. Specifically, the method for selecting the virtual scene area according to the embodiment of the present application may be performed by a computer device, where the computer device may be a device such as a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, personal Computer), a personal digital assistant (Personal Digital Assistant, PDA), etc., and the terminal device may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, etc. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligent platforms.
For example, when the selection method of the virtual scene area is run on the terminal, the terminal device stores a game application program and presents a part of the game scene in the game through the display component. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and installs a game application program and runs the game application program. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including game screens and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running the game, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen.
For example, when the selection method of the virtual scene area is run on a server, it may be a cloud game. Cloud gaming refers to a game style based on cloud computing. In the running mode of the cloud game, a running main body of the game application program and a game picture presentation main body are separated, and the storage and the running of the selection method of the virtual scene area are completed on a cloud game server. The game image presentation is completed at a cloud game client, which is mainly used for receiving and sending game data and presenting game images, for example, the cloud game client may be a display device with a data transmission function, such as a mobile terminal, a television, a computer, a palm computer, a personal digital assistant, etc., near a user side, but a terminal device for selecting a virtual scene area is a cloud game server in the cloud. When playing the game, the user operates the cloud game client to send an operation instruction to the cloud game server, the cloud game server runs the game according to the operation instruction, codes and compresses data such as game pictures and the like, returns the data to the cloud game client through a network, and finally decodes the data through the cloud game client and outputs the game pictures.
Referring to fig. 1a, fig. 1a is a system schematic diagram of a virtual scene area selection method according to an embodiment of the present application. The system may include at least one computer device 1000, at least one gamepad 2000, at least one database 3000, and a network 4000. The gamepad 2000 held by the user may be connected to the computer device 1000 by wire, bluetooth or network 4000. Computer device 1000 is any device having computing hardware capable of supporting and executing software products corresponding to a game. In addition, the computer device 1000 has one or more multi-touch sensitive screens for sensing and obtaining input of a user through touch or slide operations performed at multiple points of the one or more touch sensitive display screens. In addition, when the system includes a plurality of computer devices 1000, a plurality of game pads 2000, a plurality of networks 4000, different game pads 2000 may be interconnected with the computer devices 1000 through different bluetooth or networks 4000. The network 4000 may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, the different computer devices 1000 may be connected to other terminals or to a server or the like using their own bluetooth network or hotspot network. For example, multiple users may be online through different computer devices 1000 so as to be connected through an appropriate network and synchronized with each other to support multi-user gaming. In addition, the system may include a plurality of databases 3000, the plurality of databases 3000 being coupled to different computer devices 1000, and information related to the game environment may be continuously stored in the databases 3000 while different users play multi-user games online.
It should be noted that, the system schematic diagram of the selection system of the virtual scene area shown in fig. 1a is only an example, and the selection system and the scene of the virtual scene area described in the embodiment of the present application are for more clearly describing the technical solution of the embodiment of the present application, and do not constitute a limitation to the technical solution provided by the embodiment of the present application, and as a person of ordinary skill in the art can know that, along with the evolution of the selection system of the virtual scene area and the appearance of the new service scene, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
In this embodiment, the description is given from the perspective of a virtual scene area selection apparatus, which may be integrated in a computer device that has a storage unit, is equipped with a microprocessor, and has computing capability.
Referring to fig. 1b, fig. 1b is a first flowchart of a method for selecting a virtual scene area according to an embodiment of the application. The selection method of the virtual scene area comprises the following steps:
in step 101, a graphical user interface is displayed, the graphical user interface comprising at least part of a virtual scene picture, the virtual scene picture being made up of a plurality of virtual scene areas.
The virtual scene picture is a picture of a three-dimensional virtual scene provided while an application program runs on the computer device; the virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional scene, or a purely fictional scene. The scene picture displayed on the graphical user interface is the picture presented when the three-dimensional virtual scene is observed by a virtual object. The user controls the virtual object in the game scene through the terminal, and the virtual object can observe the three-dimensional virtual scene through a virtual lens. Taking a real-time strategy game as an example, the virtual lens may be positioned above the virtual scene and capture part of the scene in a top-down view, so that a partial virtual scene picture is displayed on the graphical user interface.
Specifically, referring to fig. 1c, fig. 1c is a first schematic diagram of a graphical user interface according to an embodiment of the present application. The graphical user interface is presented on the screen of the computer device 1000; in a real-time strategy game it includes a plurality of virtual scene areas (area A, area B, area C, area D, area E, area F, etc.) and a control toolbar. The control toolbar may include a virtual object type selection bar (virtual object type 10, virtual object type 20, virtual object type 30, etc.), a skill type bar (skill 1, skill 2, etc.), and the like. The user can select the virtual scene area to be controlled through the area selection box 40, and then control that area through the virtual object type selection bar, the skill type bar, or another control in the control toolbar.
For example, if the virtual scene area currently selected by the area selection box 40 is area D and the user performs a control operation on virtual object type 10 in the control toolbar, a virtual object of virtual object type 10 will be placed in area D; if instead the user performs a control operation on skill 1 in the control toolbar, the skill corresponding to skill 1 is released onto area D.
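As an illustration of this interaction (not code from the patent), the following minimal Python sketch applies a toolbar action to whichever region the selection box currently covers; the Region class and apply_toolbar_action helper are hypothetical names introduced only for this example.

```python
# Minimal sketch: applying a control-toolbar action to the currently selected
# region. All names here are illustrative assumptions, not the patent's API.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    objects: list = field(default_factory=list)   # virtual objects placed in the region
    effects: list = field(default_factory=list)   # skills released onto the region

def apply_toolbar_action(selected: Region, action: str, payload: str) -> None:
    # The toolbar action always targets the region under the selection box.
    if action == "place_object":
        selected.objects.append(payload)
    elif action == "release_skill":
        selected.effects.append(payload)

region_d = Region("D")
apply_toolbar_action(region_d, "place_object", "virtual object type 10")
apply_toolbar_action(region_d, "release_skill", "skill 1")
print(region_d)   # Region(name='D', objects=['virtual object type 10'], effects=['skill 1'])
```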
In step 102, in response to a first control operation for a first rocker in a game handle, a first position of the first rocker in a first preset sensing area is determined, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area.
As shown in fig. 1d, fig. 1d is a first schematic diagram of a first preset sensing area according to an embodiment of the present application. When the user performs the first control operation on a rocker of the game handle, the game handle detects the first position of the controlled first rocker within the corresponding first preset sensing area. As shown in fig. 1d, the first rocker is at a first position 60 within its corresponding first preset sensing area.
Specifically, to improve the control efficiency of the game handle compared with the prior art, the present application partitions the first preset sensing area 50 corresponding to the first rocker so that it at least includes a first sub-preset sensing area 51 and a second sub-preset sensing area 52.
In step 103, a first moving direction of the first position relative to the center point of the first preset sensing area is determined, and the target sub-preset sensing area where the first position is located is determined from the first sub-preset sensing area and the second sub-preset sensing area.
The first sub-preset sensing area 51 and the second sub-preset sensing area 52 obtained after partitioning are each assigned a corresponding area selection frame moving speed, which may be expressed as the number of virtual scene areas (at least one) crossed per unit time. Therefore, to know the area selection frame moving speed expected by the user, it is necessary to determine whether the first position is in the first sub-preset sensing area or in the second sub-preset sensing area, and the sub-preset sensing area in which the first position is located is determined as the target sub-preset sensing area.
Specifically, the first moving direction of the first position relative to the center point of the first preset sensing area also needs to be determined. To improve the flexibility of rocker control, the first preset sensing area may be divided into eight directions: north, south, west, east, northwest, northeast, southwest, and southeast (this division is only for convenience of explanation and does not represent actual geographic directions; it may equally be divided into up, down, left, right, upper-left, lower-left, upper-right, and lower-right). If the first moving direction of the first position relative to the center point of the first preset sensing area is southeast, it can be determined that the user expects the area selection frame to move southeast.
For example, if the first position is in the first sub-preset sensing area, the first sub-preset sensing area is the target sub-preset sensing area; if the first position is in the second sub-preset sensing area, the second sub-preset sensing area is the target sub-preset sensing area.
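A minimal Python sketch of steps 102 and 103 is given below. It is not the patent's implementation: the inner-radius threshold, the eight direction labels, and the helper name classify_stick are assumptions made only to illustrate how a rocker offset could be mapped to a moving direction and a sub-preset sensing area.

```python
# Illustrative sketch, assuming a normalized rocker offset in [-1, 1] on each
# axis and an assumed boundary radius between the two sub-areas.
import math

INNER_RADIUS = 0.5   # assumed boundary of the first (inner) sub-preset sensing area
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]   # eight 45-degree sectors

def classify_stick(x: float, y: float):
    """x, y: offset of the first rocker from the sensing area's center (y points north)."""
    distance = math.hypot(x, y)
    sector = round(math.degrees(math.atan2(y, x)) / 45.0) % 8   # quantize the angle
    direction = DIRECTIONS[sector]
    sub_area = "first" if distance <= INNER_RADIUS else "second"
    return direction, sub_area

print(classify_stick(0.3, 0.3))    # ('NE', 'first')  -> gentle push, inner sub-area
print(classify_stick(0.7, -0.7))   # ('SE', 'second') -> strong push, outer sub-area
```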
In step 104, a target area selection frame moving speed corresponding to the target sub-preset sensing area is obtained.
The first sub-preset sensing area and the second sub-preset sensing area correspond to different moving speeds of the area selection frame respectively, so that the moving speed of the corresponding target area selection frame can be determined according to whether the target sub-preset sensing area is the first sub-preset sensing area or the second sub-preset sensing area.
In step 105, the control region selection frame moves from the current virtual scene region along the first movement direction according to the target region selection frame movement speed, so as to select a target virtual scene region from a plurality of virtual scene regions.
After determining the moving speed and the first moving direction of the target area selection frame, the area selection frame can be controlled to move along the first moving direction from the current virtual scene area according to the moving speed of the target area selection frame.
Specifically, the manner of selecting the target virtual scene area from the plurality of virtual scene areas may be: when it is detected that the first control operation on the first rocker has ended, the virtual scene area currently selected by the area selection frame is determined as the target virtual scene area.
For example, as shown in fig. 1c and fig. 1e, fig. 1e is a second schematic diagram of a graphical user interface according to an embodiment of the present application. When the first moving direction is northeast and the target area selection frame moving speed is one virtual scene area per unit time, the area selection frame is controlled to move one virtual scene area per unit time toward the northeast from the current area D. For example, in fig. 1e, the area selection frame moves from area D to area B within one unit time.
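Steps 104 and 105 can be sketched as follows. The speed values, the grid representation, and the move_selection helper are illustrative assumptions only; the patent does not prescribe concrete numbers.

```python
# Illustrative sketch: each sub-preset sensing area is bound to a moving speed,
# and the selection frame advances that many grid cells per unit time.
SPEED_BY_SUB_AREA = {"first": 1, "second": 3}   # assumed regions per unit time

# Grid steps for the eight directions; grid y grows downward (screen-style grid).
STEP = {"N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
        "S": (0, 1), "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1)}

def move_selection(cell, direction, sub_area, grid_w, grid_h):
    """Advance the selection frame by one unit-time tick, clamped to the grid."""
    speed = SPEED_BY_SUB_AREA[sub_area]
    dx, dy = STEP[direction]
    x = min(max(cell[0] + dx * speed, 0), grid_w - 1)
    y = min(max(cell[1] + dy * speed, 0), grid_h - 1)
    return (x, y)

# Starting from area D at cell (1, 1) and pushing northeast within the inner
# sub-area: one region per tick. When the push ends, the cell under the frame
# becomes the target virtual scene area.
print(move_selection((1, 1), "NE", "first", grid_w=4, grid_h=4))   # (2, 0)
```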
In some embodiments, the first control operation is a persistent operation, and after the step of controlling the area selection frame to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, the method further includes:
(1) When the first position is detected to be changed due to the first control operation, determining a changed second position;
(2) Determining a second moving direction of the second position relative to a center point of the first preset sensing area;
(3) Determining whether the second position is in the target sub-preset sensing area to obtain a determination result;
(4) And controlling the area selection frame to move based on the second moving direction and the determined result.
The first control operation may be a continuous pushing operation on the first rocker. When it is detected that the first position has changed because the user's continuous pushing of the first rocker has changed (for example, the first rocker was originally pushed toward the northeast with a large force and is subsequently pushed toward the northeast with a smaller force, or is subsequently pushed toward the southwest with a small force), the changed second position needs to be determined. The second moving direction of the second position relative to the center point of the first preset sensing area is then re-determined; the purpose of determining the second moving direction is to establish whether the user wants the area selection frame to move in a different direction.
Specifically, the sub-preset sensing area may also change between the first position and the second position: for example, the original first position is located in the first sub-preset sensing area while the second position is located in the second sub-preset sensing area, and because the sub-preset sensing areas differ, the moving speed of the area selection frame differs as well. Therefore, it is necessary to determine whether the second position is still in the target sub-preset sensing area where the original first position was located, so as to obtain a determination result, and the area selection frame is then controlled to move based on the second moving direction and the determination result.
In some embodiments, the step of controlling the area selection frame to move based on the second moving direction and the determination result includes:
(1) If the determined result is that the second position is not in the target sub-preset sensing area, acquiring the moving speed of a current area selection frame corresponding to the current sub-preset sensing area in which the second position is located;
(2) And the control region selection frame moves along the second moving direction from the current virtual scene region according to the moving speed of the current region selection frame so as to select a target virtual scene region from a plurality of virtual scene regions.
And if the determined result is that the second position is not in the target sub-preset sensing area where the original first position is, acquiring the moving speed of the current area selection frame corresponding to the current sub-preset sensing area where the second position is.
For example, if the target sub-preset sensing area where the first position was located is the first sub-preset sensing area and the current sub-preset sensing area where the second position is located is the second sub-preset sensing area, the current area selection frame moving speed is the area selection frame moving speed corresponding to the second sub-preset sensing area.
Specifically, after determining the moving speed of the current region selection frame, in combination with the second moving direction, the region selection frame is controlled to move along the second moving direction from the current virtual scene region according to the moving speed of the current region selection frame so as to select a target virtual scene region from a plurality of virtual scene regions.
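The following sketch illustrates, under the same assumptions as the earlier snippets, how the speed and direction might be re-evaluated when the continued push changes; the state dictionary and on_stick_update helper are invented for this example.

```python
# Illustrative sketch: when the second position leaves the original target
# sub-area, the selection frame switches to the speed of the new sub-area.
SPEED_BY_SUB_AREA = {"first": 1, "second": 3}   # assumed regions per unit time

def on_stick_update(direction: str, sub_area: str, state: dict) -> None:
    """direction/sub_area describe the changed (second) rocker position."""
    if sub_area != state["sub_area"]:
        state["sub_area"] = sub_area
        state["speed"] = SPEED_BY_SUB_AREA[sub_area]   # current area selection frame moving speed
    state["direction"] = direction                      # movement continues along the second direction

state = {"sub_area": "first", "speed": 1, "direction": "NE"}
on_stick_update("SW", "second", state)   # the push is redirected and strengthened
print(state)   # {'sub_area': 'second', 'speed': 3, 'direction': 'SW'}
```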
In some embodiments, the step of selecting the target virtual scene area from the plurality of virtual scene areas comprises:
(1) Controlling the region selection frame to move along the determined moving direction from the current virtual scene region according to the determined moving speed of the region selection frame;
(2) When it is detected that the continued first control operation is interrupted, determining the virtual scene area currently selected by the area selection frame as the target virtual scene area.
The manner of selecting the target virtual scene area from the plurality of virtual scene areas may be: while the area selection frame is moving in a given moving direction at the corresponding area selection frame moving speed, if it is detected that the continued first control operation is interrupted, the virtual scene area currently selected by the area selection frame is the virtual scene area the user expects to select; that is, once the desired area is selected, the user no longer needs to perform the first control operation on the first rocker and simply interrupts it.
For example, as shown in fig. 1c and fig. 1e, when the first moving direction is northeast and the target area selection frame moving speed is one virtual scene area per unit time, the area selection frame is controlled to move one virtual scene area per unit time toward the northeast from the current area D. The area selection frame moves northeast from area D in fig. 1c, reaches area B after one unit time, and, if the first control operation has not been interrupted, continues to move toward the northeast. If the partial virtual scene picture in the graphical user interface does not change with the area selection frame, area B has no virtual scene area to its north, so the selection frame moves due east to area C; if the first control operation is interrupted at this point, area C is the target virtual scene area.
Specifically, if the partial virtual scene picture in the graphical user interface does change with the area selection frame, area A, area B, and other areas located to the northeast of area C are displayed in the graphical user interface, the area selection frame keeps moving toward the northeast until the first control operation is interrupted, and the virtual scene area then selected by the area selection frame is determined as the target virtual scene area.
In some embodiments, the method further comprises:
(1) Determining a third position of a second rocker in a second preset sensing area in response to a second control operation for the second rocker in the game handle;
(2) Determining a third moving direction of the third position relative to a center point of the second preset sensing area;
(3) Judging whether the third moving direction comprises a plurality of directions or not to obtain a judging result;
(4) And controlling the area selection frame to move based on the judging result.
Whereas the first control operation on the first rocker handles the case in which the current virtual scene area is far from the target virtual scene area, precise small-range control when the two are close together can be achieved through a second control operation on the second rocker of the game handle.
Specifically, as shown in fig. 1f, fig. 1f is a first schematic diagram of a second preset sensing area according to an embodiment of the present application. The second rocker is similar to the first rocker and also has a corresponding second preset sensing area 70. The figure shows a third position 80 of the second rocker within the second preset sensing area, and a third moving direction of the third position 80 relative to the center point of the second preset sensing area is determined; in fig. 1f, for example, this third moving direction may be northeast.
To achieve precise small-range control, the second preset sensing area is divided into four directions: north, south, west, and east. Because an imprecise push by the user may make the third moving direction fall between directions (northwest, northeast, southwest, or southeast), it is necessary to determine whether the third moving direction includes several directions and obtain a determination result, and the area selection frame is then controlled to move based on that result. That is, to avoid misoperation, the second rocker in this embodiment only responds to second control operations in the four preset moving directions (north, south, west, and east, i.e. up, down, left, and right). If the user pushes the second rocker in a direction other than the four preset moving directions, the preset moving direction closest to the push direction is determined, and the second control operation is treated as if the user had performed it in that preset moving direction.
In some embodiments, the step of controlling the area selection box to move based on the determination result includes:
(1.1) If the third moving direction includes several directions, determining the target direction to which the third position is closest;
(1.2) controlling the region selection frame to move to the next virtual scene region in the target direction, and determining the next virtual scene region as a target virtual scene region.
If the third moving direction includes several directions, the second control operation on the second rocker can still only move the area selection frame in a single direction, so the target direction to which the third position is closest needs to be determined.
For example, in fig. 1f, the distance from the third position 80 to the horizontal axis is L1 and the distance to the vertical axis is L2. Since L1 is smaller than L2, the third position 80 is closer to the horizontal axis, so due east, i.e. horizontally to the right, is determined as the target direction.
Specifically, the movement of the area selection frame in response to the second control operation on the second rocker is set to a single movement of only one virtual scene area; that is, whether the second control operation is a single push or a continuous push, it moves the area selection frame by only one virtual scene area. Therefore, after the target direction is determined, the area selection frame is controlled to move to the next virtual scene area in the target direction, and that next virtual scene area is determined as the target virtual scene area.
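A minimal sketch of this fine-grained control is shown below; the axis-comparison rule mirrors the L1-versus-L2 comparison above, while the coordinate convention and helper names are assumptions.

```python
# Illustrative sketch: snap the second rocker's push to the nearest of the four
# preset directions and move the selection frame exactly one region.
CARDINAL_STEP = {"E": (1, 0), "W": (-1, 0), "N": (0, -1), "S": (0, 1)}   # grid y grows downward

def snap_to_cardinal(x: float, y: float) -> str:
    # The dominant axis component of the rocker offset decides the target direction.
    if abs(x) >= abs(y):
        return "E" if x > 0 else "W"
    return "N" if y > 0 else "S"

def fine_move(cell, stick_x, stick_y):
    dx, dy = CARDINAL_STEP[snap_to_cardinal(stick_x, stick_y)]
    # Single push or continuous push, the frame advances exactly one region.
    return (cell[0] + dx, cell[1] + dy)

print(snap_to_cardinal(0.8, 0.3))    # 'E' -> the push is closer to the horizontal axis
print(fine_move((1, 1), 0.8, 0.3))   # (2, 1)
```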
In some embodiments, the first preset sensing region includes a first preset dead zone, the second preset sensing region includes a second preset dead zone, and a range of the first preset dead zone is smaller than a range of the second preset dead zone.
As shown in fig. 1g and fig. 1h, fig. 1g is a second schematic diagram of a first preset sensing area provided in an embodiment of the present application, and fig. 1h is a second schematic diagram of a second preset sensing area provided in an embodiment of the present application. The first preset sensing area 50 includes a first preset dead zone 53, and the second preset sensing area 70 includes a second preset dead zone 71. A dead zone is a zone in which a control operation on the rocker receives no response; for example, if the user performs the second control operation on the second rocker such that the third position is within the second preset dead zone 71, the second control operation on the second rocker is not responded to until the third position moves outside the second preset dead zone 71.
Specifically, to achieve rapid control with the first rocker and precise control with the second rocker, dead zones of different sizes are set for the two rockers. Comparing fig. 1g with fig. 1h, when the user performs the first control operation on the first rocker, the first position can easily be pushed out of the first preset dead zone 53, enabling quick control; for the second control operation on the second rocker, the third position is harder to push out of the second preset dead zone 71, which prevents an accidental touch of the second rocker from moving the area selection frame.
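A dead-zone check of this kind could look like the sketch below; the two radius values are purely illustrative assumptions, chosen only to show that the second rocker's dead zone is the larger one.

```python
# Illustrative sketch: per-rocker dead zones of different sizes.
import math

FIRST_DEAD_ZONE = 0.10    # assumed small dead zone for the first (fast) rocker
SECOND_DEAD_ZONE = 0.35   # assumed larger dead zone for the second (fine) rocker

def rocker_active(x: float, y: float, dead_zone: float) -> bool:
    """Respond only once the rocker position has left its dead zone."""
    return math.hypot(x, y) > dead_zone

print(rocker_active(0.2, 0.1, FIRST_DEAD_ZONE))    # True  -> first rocker responds
print(rocker_active(0.2, 0.1, SECOND_DEAD_ZONE))   # False -> second rocker stays idle
```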
In some embodiments, before the step of determining a third position of a second rocker in a gamepad in response to a second control operation for the second rocker, further comprises:
responding to a third control operation for a designated key in the game handle, and screening out candidate virtual scene areas from the virtual scene picture;
the step of moving the control region selection frame to the next virtual scene region in the target direction and determining the next virtual scene region as a target virtual scene region includes:
and the control region selection frame moves to the next candidate virtual scene region towards the target direction, and the next candidate virtual scene region is determined to be the target virtual scene region.
To enable quick control over particular virtual scene areas, a third control operation (for example, a pressing operation) may be performed on a designated key (for example, the RT key) of the game handle, after which candidate virtual scene areas are screened out of the virtual scene picture. The candidate virtual scene areas may be obtained by dividing the virtual scene areas according to their types; for example, if the virtual scene areas include type A, type B, type C, and so on, one type (for example, type A) may be designated as the candidate type. Thus, when the user performs the third control operation on the designated key of the game handle, candidate virtual scene areas of the designated type can be screened out from the virtual scene picture.
Specifically, after the candidate virtual scene areas are screened out, the user controls the area selection frame, through the second control operation on the second rocker, to move among the plurality of candidate virtual scene areas so as to select the target virtual scene area.
As shown in fig. 1i, fig. 1i is a third schematic diagram of a graphical user interface according to an embodiment of the present application. If area C and area E are candidate virtual scene areas and the virtual scene area currently selected by the area selection frame is area A, then when the third moving direction is due east, neither area A nor area B, which lies between area A and area C in the due-east direction, is a candidate virtual scene area; the selection frame can therefore move directly from area A to area C, and area C is determined as the target virtual scene area.
Specifically, while selecting the target virtual scene area from the candidate virtual scene areas, the user needs to keep operating the designated key, for example by long-pressing the RT key. If the long press is interrupted, the original manner of selecting the target virtual scene area from all virtual scene areas is restored (for example, determining the target virtual scene area according to the first control operation on the first rocker, or according to the second control operation on the second rocker).
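The candidate-area jump can be sketched as below. The Area layout, the "type A" criterion, and the next_candidate helper are assumptions introduced for illustration; the patent only specifies that non-candidate areas are skipped while the designated key is held.

```python
# Illustrative sketch: while the designated key is held, the selection frame
# jumps to the nearest candidate region ahead in the chosen direction.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    kind: str
    column: int   # position along the movement axis (due east in this example)

def next_candidate(areas, current, candidate_kind):
    """Skip non-candidate regions and return the nearest candidate ahead, if any."""
    ahead = [a for a in areas if a.column > current.column and a.kind == candidate_kind]
    return min(ahead, key=lambda a: a.column, default=None)

row = [Area("A", "type B", 0), Area("B", "type C", 1),
       Area("C", "type A", 2), Area("E", "type A", 4)]
target = next_candidate(row, row[0], "type A")
print(target.name if target else None)   # 'C' -- areas A and B are skipped
```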
In some embodiments, after the step of selecting the candidate virtual scene area from the virtual scene screen in response to the third control operation for the designated key in the gamepad, the method further comprises:
and displaying each candidate virtual scene area in a preset display style in the graphical user interface.
After the candidate virtual scene areas are screened out of the virtual scene picture, each candidate virtual scene area is displayed in a preset display style, as shown in fig. 1i, to indicate to the user which virtual scene areas in the graphical user interface are candidates. The preset display style may be the border drawn around the virtual scene areas in fig. 1i, a highlight, or the like, which is not limited here.
From the foregoing, it can be seen that in the embodiment of the present application a graphical user interface is displayed, the graphical user interface including at least part of a virtual scene picture, the virtual scene picture being composed of a plurality of virtual scene areas; in response to a first control operation on a first rocker in a game handle, a first position of the first rocker in a first preset sensing area is determined, wherein the first preset sensing area at least includes a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; a first moving direction of the first position relative to the center point of the first preset sensing area is determined, and the target sub-preset sensing area where the first position is located is determined from the first sub-preset sensing area and the second sub-preset sensing area; a target area selection frame moving speed corresponding to the target sub-preset sensing area is acquired; and the area selection frame is controlled to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene area from the plurality of virtual scene areas. Because the first preset sensing area is partitioned, the area selection frame can be controlled to move at different moving speeds depending on which sub-preset sensing area the first position is located in when the first rocker is operated, which improves the control efficiency of the game handle.
In order to facilitate better implementation of the method for selecting the virtual scene area provided by the embodiment of the application, the embodiment of the application also provides a device based on the method for selecting the virtual scene area. The meaning of the nouns is the same as that in the selection method of the virtual scene area, and specific implementation details can be referred to the description in the embodiment of the method.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a virtual scene area selecting device according to an embodiment of the present application, where the virtual scene area selecting device may include a display module 301, a first determining module 302, a second determining module 303, a first obtaining module 304, a first control module 305, and so on.
A display module 301 for displaying a graphical user interface, the graphical user interface comprising at least a part of a virtual scene, the virtual scene being composed of a plurality of virtual scene areas;
the first determining module 302 is configured to determine, in response to a first control operation for a first rocker in the game paddle, a first position where the first rocker is located in a first preset sensing area, where the first preset sensing area at least includes a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area;
A second determining module 303, configured to determine a first moving direction of the first position relative to the center point of the first preset sensing area, and to determine a target sub-preset sensing area where the first position is located from the first sub-preset sensing area and the second sub-preset sensing area;
a first obtaining module 304, configured to obtain a target area selection frame moving speed corresponding to the target sub-preset sensing area;
the first control module 305 is configured to control the region selection frame to move from the current virtual scene region along the first moving direction according to the moving speed of the target region selection frame, so as to select a target virtual scene region from the multiple virtual scene regions.
In some embodiments, the first control operation is a persistent operation, the apparatus further comprising:
a third determining module configured to determine a changed second position when it is detected that the first position has changed due to the first control operation;
a fourth determining module, configured to determine a second moving direction of the second position relative to a center point of the first preset sensing area;
a fifth determining module, configured to determine whether the second position is in the target sub-preset sensing area, to obtain a determination result;
And the second control module is used for controlling the area selection frame to move based on the second moving direction and the determination result.
In some embodiments, the second control module includes:
the first obtaining submodule is used for obtaining the moving speed of the current region selection frame corresponding to the current sub-preset sensing region where the second position is located if the second position is not located in the target sub-preset sensing region according to the determining result;
and the first control submodule is used for controlling the region selection frame to move along the second moving direction from the current virtual scene region according to the moving speed of the current region selection frame so as to select a target virtual scene region from a plurality of virtual scene regions.
In some embodiments, the first control module 305 and the first control sub-module include:
the second control submodule is used for controlling the region selection frame to move along the determined moving direction from the current virtual scene region according to the determined moving speed of the region selection frame;
and the first determining submodule is used for determining the virtual scene area currently selected by the area selection frame as the target virtual scene area when it is detected that the continued first control operation is interrupted.
In some embodiments, the apparatus further comprises:
a fourth determining module, configured to determine a third position of a second rocker in a second preset sensing area in response to a second control operation for the second rocker in the game handle;
a fifth determining module for determining a third moving direction of the third position relative to a center point of the second preset sensing area;
the judging module is used for judging whether the third moving direction comprises a plurality of directions or not to obtain a judging result;
and the third control module is used for controlling the region selection frame to move based on the judging result.
In some embodiments, the third control module includes:
the second determining submodule is used for determining, if the third moving direction includes several directions, the target direction to which the third position is closest;
and the third control submodule is used for controlling the region selection frame to move to the next virtual scene region towards the target direction and determining the next virtual scene region as the target virtual scene region.
In some embodiments, the first preset sensing region includes a first preset dead zone, the second preset sensing region includes a second preset dead zone, and a range of the first preset dead zone is smaller than a range of the second preset dead zone.
In some embodiments, the apparatus further comprises:
the screening module is used for responding to a third control operation for a designated key in the game handle and screening candidate virtual scene areas from the virtual scene images;
and the fourth control module is used for controlling the region selection frame to move to the next candidate virtual scene region towards the target direction and determining the next candidate virtual scene region as the target virtual scene region.
In some embodiments, the apparatus further comprises:
and the preset display style display module is used for displaying each candidate virtual scene area in the graphical user interface in a preset display style.
From the foregoing, it can be seen that in the embodiment of the present application the display module 301 displays a graphical user interface, the graphical user interface including at least part of a virtual scene picture, the virtual scene picture being composed of a plurality of virtual scene areas; the first determining module 302 determines, in response to a first control operation on a first rocker in a game handle, a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least includes a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; the second determining module 303 determines a first moving direction of the first position relative to the center point of the first preset sensing area, and determines the target sub-preset sensing area where the first position is located from the first sub-preset sensing area and the second sub-preset sensing area; the first obtaining module 304 obtains a target area selection frame moving speed corresponding to the target sub-preset sensing area; and the first control module 305 controls the area selection frame to move from the current virtual scene area along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene area from the plurality of virtual scene areas. Because the first preset sensing area is partitioned, the area selection frame can be controlled to move at different moving speeds depending on which sub-preset sensing area the first position is located in when the first rocker is operated, which improves the control efficiency of the game handle.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated here.
Correspondingly, an embodiment of the present application further provides a computer device, which may be a terminal or a server; the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a game console, a personal computer (PC, Personal Computer), or a personal digital assistant (Personal Digital Assistant, PDA). Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in Fig. 3, the computer device 1000 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The processor 401 is the control center of the computer device 1000. It connects the various parts of the entire computer device 1000 using various interfaces and lines, and performs the various functions of the computer device 1000 and processes data by running or loading the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the computer device 1000 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 1000 loads instructions corresponding to the processes of one or more application programs into the memory 402, and the processor 401 executes the application programs stored in the memory 402, thereby implementing the following functions:
displaying a graphical user interface comprising at least a portion of a virtual scene screen, the virtual scene screen being composed of a plurality of virtual scene areas; determining, in response to a first control operation for a first rocker in a game handle, a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; determining a first moving direction of the first position relative to a center point of the first preset sensing area, and determining, from the first sub-preset sensing area and the second sub-preset sensing area, a target sub-preset sensing area where the first position is located; acquiring a target area selection frame moving speed corresponding to the target sub-preset sensing area; and controlling the region selection frame to move from the current virtual scene region along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene region from the plurality of virtual scene areas.
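One way a processor loop might tie these steps together is sketched below; read_first_stick is a hypothetical polling function, the radius and speed values are assumptions, and confirming the selection when the operation is interrupted follows the behaviour described in the embodiments above.

```python
import math

INNER_RADIUS = 0.5                     # illustrative boundary between the two sub-areas
SPEEDS = {"inner": 2.0, "outer": 6.0}  # illustrative regions-per-second values


def selection_loop(read_first_stick, dt: float = 1 / 60, max_steps: int = 600) -> int:
    """Move the selection frame while the first rocker is held, and confirm the
    currently selected region once the operation is interrupted (stick released)."""
    index = 0.0
    for _ in range(max_steps):
        dx, dy = read_first_stick()
        magnitude = math.hypot(dx, dy)
        if magnitude == 0.0:                    # first control operation interrupted
            break                               # current selection becomes the target
        subarea = "inner" if magnitude <= INNER_RADIUS else "outer"
        index += SPEEDS[subarea] * dt           # faster in the outer sub-area
    return int(index)


# Usage with a canned input sequence standing in for real gamepad polling:
samples = iter([(1.0, 0.0)] * 30 + [(0.0, 0.0)])
print(selection_loop(lambda: next(samples)))    # -> 3 with these example values
```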
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated here.
Optionally, as shown in Fig. 3, the computer device 1000 further includes: a touch display screen 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. Those skilled in the art will appreciate that the computer device structure shown in Fig. 3 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The touch display screen 403 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 401, and it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 401 to determine the type of touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display screen 403 may also implement an input function as part of the input unit 406.
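For orientation only, the simplified sketch below mirrors the flow just described (touch detection, touch controller, processor); the classes, panel resolution, and raw value range are illustrative assumptions, not part of this application.

```python
from dataclasses import dataclass


@dataclass
class RawTouch:
    x_raw: int
    y_raw: int
    pressed: bool


class TouchController:
    """Converts raw panel readings into touch-point coordinates for the processor."""

    def __init__(self, panel_width: int, panel_height: int, raw_max: int = 4095):
        self.w, self.h, self.raw_max = panel_width, panel_height, raw_max

    def to_coordinates(self, raw: RawTouch):
        return (raw.x_raw * self.w // self.raw_max,
                raw.y_raw * self.h // self.raw_max,
                raw.pressed)


def classify_event(prev_pressed: bool, pressed: bool) -> str:
    """The processor decides the touch event type from the press/release transition."""
    if pressed and not prev_pressed:
        return "touch_down"
    if not pressed and prev_pressed:
        return "touch_up"
    return "touch_move" if pressed else "idle"


controller = TouchController(panel_width=1920, panel_height=1080)
x, y, pressed = controller.to_coordinates(RawTouch(x_raw=2048, y_raw=1024, pressed=True))
print(classify_event(prev_pressed=False, pressed=pressed), (x, y))  # touch_down (960, 270)
```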
In the embodiment of the present application, the processor 401 executes the game application program to generate a graphical user interface on the touch display screen 403, where the graphical user interface includes at least a part of the virtual scene screen, and the virtual scene screen is composed of a plurality of virtual scene areas. The touch display screen 403 is used for presenting the graphical user interface and receiving the operation instructions generated by the user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other computer devices.
The audio circuit 405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 405 and converted into audio data; the audio data are then processed by the processor 401 and sent via the radio frequency circuit 404 to, for example, another computer device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between peripheral earphones and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to supply power to the various components of the computer device 1000. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The power supply 407 may also include one or more direct current or alternating current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in fig. 3, the computer device 1000 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
The description of each of the foregoing embodiments has its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment displays a graphical user interface, where the graphical user interface includes at least a part of a virtual scene screen, and the virtual scene screen is composed of a plurality of virtual scene areas; determines, in response to a first control operation for a first rocker in a game handle, a first position of the first rocker in a first preset sensing area, where the first preset sensing area at least includes a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; determines a first moving direction of the first position relative to a center point of the first preset sensing area, and determines, from the first sub-preset sensing area and the second sub-preset sensing area, the target sub-preset sensing area in which the first position is located; acquires a target area selection frame moving speed corresponding to the target sub-preset sensing area; and controls the region selection frame to move from the current virtual scene region along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene region from the plurality of virtual scene areas. In this way, the first preset sensing area is partitioned, so that when the first rocker is operated, the region selection frame is controlled to move at different moving speeds depending on the sub-preset sensing area in which the first position is located, which improves the control efficiency of the game handle.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored; the computer programs can be loaded by a processor to perform the steps in any of the virtual scene area selection methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
displaying a graphical user interface comprising at least a portion of a virtual scene screen, the virtual scene screen being composed of a plurality of virtual scene areas; determining, in response to a first control operation for a first rocker in a game handle, a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area; determining a first moving direction of the first position relative to a center point of the first preset sensing area, and determining, from the first sub-preset sensing area and the second sub-preset sensing area, a target sub-preset sensing area where the first position is located; acquiring a target area selection frame moving speed corresponding to the target sub-preset sensing area; and controlling the region selection frame to move from the current virtual scene region along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene region from the plurality of virtual scene areas.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not repeated here.
The storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
Since the computer program stored in the storage medium can execute the steps in any of the virtual scene area selection methods provided by the embodiments of the present application, it can achieve the beneficial effects that can be achieved by any of those methods, as detailed in the previous embodiments and not repeated here.
The method, apparatus, storage medium, and computer device for selecting a virtual scene area provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (12)

1. A method for selecting a virtual scene area, comprising:
displaying a graphical user interface comprising at least a portion of a virtual scene screen, the virtual scene screen being comprised of a plurality of virtual scene areas;
responding to a first control operation of a first rocker in a game handle, determining a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area;
determining a first moving direction of the first position relative to a center point of the first preset sensing area, and determining, from the first sub-preset sensing area and the second sub-preset sensing area, a target sub-preset sensing area where the first position is located;
acquiring a target area selection frame moving speed corresponding to the target sub-preset sensing area;
and controlling the region selection frame to move from the current virtual scene region along the first moving direction according to the target area selection frame moving speed, so as to select a target virtual scene region from the plurality of virtual scene areas.
2. The method according to claim 1, wherein the first control operation is a continuous operation, and the method further comprises, after the step of controlling the region selection frame to move from the current virtual scene region along the first moving direction according to the target area selection frame moving speed:
when the first position is detected to be changed due to the first control operation, determining a changed second position;
determining a second moving direction of the second position relative to a center point of the first preset sensing area;
determining whether the second position is in the target sub-preset sensing area, so as to obtain a determination result;
and controlling the area selection frame to move based on the second moving direction and the determined result.
3. The method according to claim 2, wherein the step of controlling the region selection frame to move based on the second moving direction and the determination result comprises:
if the determination result is that the second position is not in the target sub-preset sensing area, acquiring a current area selection frame moving speed corresponding to the current sub-preset sensing area in which the second position is located;
and controlling the region selection frame to move from the current virtual scene region along the second moving direction according to the current area selection frame moving speed, so as to select a target virtual scene region from the plurality of virtual scene areas.
4. A method of selecting a virtual scene area according to any of claims 1 to 3, wherein the step of selecting a target virtual scene area from a plurality of virtual scene areas comprises:
controlling the region selection frame to move along the determined moving direction from the current virtual scene region according to the determined moving speed of the region selection frame;
and when it is detected that the continuous first control operation is interrupted, determining the virtual scene area currently selected by the area selection frame as the target virtual scene area.
5. The method of selecting a virtual scene area according to claim 1, further comprising:
determining a third position of a second rocker in a second preset sensing area in response to a second control operation for the second rocker in the game handle;
determining a third moving direction of the third position relative to a center point of the second preset sensing area;
judging whether the third moving direction comprises a plurality of directions, so as to obtain a judging result;
and controlling the area selection frame to move based on the judging result.
6. The method according to claim 5, wherein the step of controlling the region selection frame to move based on the judging result comprises:
if the third moving direction comprises a plurality of directions, determining the target direction to which the third position is closest;
and controlling the region selection frame to move to the next virtual scene region in the target direction, and determining the next virtual scene region as the target virtual scene region.
7. The method of claim 5, wherein the first preset sensing region includes a first preset dead zone, the second preset sensing region includes a second preset dead zone, and a range of the first preset dead zone is smaller than a range of the second preset dead zone.
8. The method of claim 6, further comprising, before the step of determining, in response to a second control operation for a second rocker in the game handle, a third position of the second rocker in a second preset sensing area:
responding to a third control operation for a designated key in the game handle, and screening out candidate virtual scene areas from the virtual scene screen;
wherein the step of controlling the region selection frame to move to the next virtual scene region in the target direction and determining the next virtual scene region as the target virtual scene region comprises:
controlling the region selection frame to move to the next candidate virtual scene region in the target direction, and determining the next candidate virtual scene region as the target virtual scene region.
9. The method of selecting a virtual scene area according to claim 8, further comprising, after the step of screening out candidate virtual scene areas from the virtual scene screen in response to a third control operation for a designated key in the game handle:
and displaying each candidate virtual scene area in a preset display style in the graphical user interface.
10. A virtual scene area selecting apparatus, comprising:
a display module for displaying a graphical user interface, the graphical user interface comprising at least a portion of a virtual scene, the virtual scene being made up of a plurality of virtual scene areas;
the first determining module is used for determining, in response to a first control operation for a first rocker in a game handle, a first position of the first rocker in a first preset sensing area, wherein the first preset sensing area at least comprises a first sub-preset sensing area and a second sub-preset sensing area, and the first sub-preset sensing area is embedded in the second sub-preset sensing area;
the second determining module is used for determining a first moving direction of the first position relative to a center point of the first preset sensing area, and determining, from the first sub-preset sensing area and the second sub-preset sensing area, a target sub-preset sensing area where the first position is located;
the first acquisition module is used for acquiring the moving speed of the target area selection frame corresponding to the target sub-preset sensing area;
and the first control module is used for controlling the region selection frame to move along the first moving direction from the current virtual scene region according to the moving speed of the target region selection frame so as to select the target virtual scene region from a plurality of virtual scene regions.
11. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the method of selecting a virtual scene area according to any of claims 1 to 9.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method for selecting a virtual scene area according to any of claims 1 to 9 when the program is executed.