CN115920368A - Picture control method, device, medium and equipment

Info

Publication number: CN115920368A
Application number: CN202211604727.1A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 周琪然, 张敏旌
Current Assignee / Original Assignee: Netease Hangzhou Network Co Ltd
Legal status: Pending
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202211604727.1A
Publication of CN115920368A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

In response to a sliding operation on a touch area in a controller, the method determines a deflection coefficient based on the action position of the sliding operation, controls the picture to deflect according to the deflection coefficient and the action range, and displays the virtual scene picture captured by the deflected virtual lens. This meets the user's varied needs when rotating the picture: when the picture should deflect slowly, a touch area with a lower deflection coefficient can be triggered, and when the picture should deflect quickly, a touch area with a higher deflection coefficient can be triggered.

Description

Picture control method, device, medium and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a screen control method, a screen control apparatus, a computer-readable storage medium, and an electronic device.
Background
In role-playing games, a user can control a virtual character in the game through a terminal device or an external device so that the character performs corresponding actions. The virtual character is usually located in a virtual scene of the game, and the user can rotate the picture through the external device/terminal device in order to observe the character's surroundings, aim at enemy units, search for target objects, and so on.
The usual way of controlling picture rotation is for the user to trigger the touch pad of a gamepad or to turn a virtual/physical joystick. However, the user needs different picture rotation ranges for different purposes, and this approach cannot satisfy those diversified needs.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The present application provides a screen control method, a screen control apparatus, a computer-readable storage medium, and an electronic device. In response to a sliding operation on a touch area in a controller, a deflection coefficient can be determined based on the action position of the sliding operation, the screen can be controlled to deflect according to the deflection coefficient and the action range, and the virtual scene picture captured by the deflected virtual lens can be displayed. This meets the user's varied needs when rotating the screen; for example, when the screen should deflect slowly, a touch area with a lower deflection coefficient can be triggered, and when the screen should deflect quickly, a touch area with a higher deflection coefficient can be triggered.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of the present application, there is provided a picture control method for providing a graphical user interface through a terminal device, the graphical user interface including a picture of a virtual scene captured through a virtual lens, the virtual scene including a virtual object controlled through the terminal device, the method including:
responding to a sliding operation aiming at a touch area in a controller, and determining an action position and an action range of the sliding operation in the touch area, wherein the controller is in communication connection with the terminal equipment;
determining a deflection coefficient to be deflected of the virtual lens according to the action position;
and controlling the virtual lens to deflect according to the deflection coefficient and the action range, and displaying the picture of the virtual scene captured by the deflected virtual lens through a graphical user interface.
In an exemplary embodiment of the present application, a virtual scene includes a virtual object controlled by a terminal device, the virtual object holding a virtual weapon;
before the action position and the action range of the sliding operation in the touch area are determined in response to the sliding operation for the touch area in the controller, the method further includes:
determining that the virtual weapon enters the aiming state.
In an exemplary embodiment of the present application, the method further comprises:
and dividing the touch area into a plurality of sub-control areas in response to the virtual weapon entering the aiming state.
In an exemplary embodiment of the present application, determining an action position of a sliding operation in a touch area includes:
determining a touch position of an initial touch point of the sliding operation in the touch area;
and determining the touch position as an acting position of the sliding operation in the touch area.
In an exemplary embodiment of the present application, determining an action position of a sliding operation in a touch area includes:
determining a target sub-control area where an initial touch point of the sliding operation is located;
and taking the target sub-control area as an acting position of the sliding operation in the touch control area.
In an exemplary embodiment of the present application, determining a deflection coefficient by which a virtual lens is to be deflected according to an action position includes:
determining deflection coefficients corresponding to the target sub-control areas, wherein different sub-control areas correspond to different deflection coefficients;
and taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient to be deflected of the virtual lens.
In an exemplary embodiment of the present application, controlling a virtual lens to deflect according to a deflection coefficient and an action range includes:
and controlling the virtual lens to deflect according to the deflection coefficient and the action range in response to the end of the sliding operation.
In an exemplary embodiment of the present application, the deflection coefficient is in a direct proportional relationship with the magnitude of the virtual lens deflection.
In an exemplary embodiment of the present application, the method further comprises:
dividing the touch area into a first sub-control area, a second sub-control area and a third sub-control area, wherein a first deflection coefficient corresponding to the first sub-control area is larger than a second deflection coefficient corresponding to the second sub-control area, and the second deflection coefficient is larger than a third deflection coefficient corresponding to the third sub-control area; the second sub-control area is located in the middle of the first sub-control area and the third sub-control area.
According to an aspect of the present application, there is provided a picture control apparatus for providing a graphic user interface through an apparatus, the graphic user interface including a picture of a virtual scene captured through a virtual lens, the virtual scene including a virtual object controlled through the apparatus, including:
the touch area determination unit is used for responding to sliding operation aiming at a touch area in the controller and determining the action position and the action range of the sliding operation in the touch area, wherein the controller is in communication connection with the terminal equipment;
the deflection coefficient determining unit is used for determining a deflection coefficient to be deflected of the virtual lens according to the action position;
and the picture deflection control unit is used for controlling the virtual lens to deflect according to the deflection coefficient and the action range, and displaying the picture of the virtual scene captured by the deflected virtual lens through a graphical user interface.
In an exemplary embodiment of the present application, a virtual scene includes a virtual object controlled by a terminal device, the virtual object holding a virtual weapon;
before the touch area determination unit determines, in response to the sliding operation for the touch area in the controller, the action position and the action range of the sliding operation in the touch area, the apparatus further includes:
and the state determination unit is used for determining that the virtual weapon enters the aiming state.
In an exemplary embodiment of the present application, the apparatus further comprises:
and the area dividing unit is used for responding to the virtual weapon entering the aiming state and dividing the touch area into a plurality of sub-control areas.
In an exemplary embodiment of the present application, the determining a position of an action of a sliding operation in the touch area by the touch area determining unit includes:
determining a touch position of an initial touch point of the sliding operation in the touch area;
and determining the touch position as an acting position of the sliding operation in the touch area.
In an exemplary embodiment of the present application, the determining a position of an action of a sliding operation in the touch area by the touch area determining unit includes:
determining a target sub-control area where an initial touch point of the sliding operation is located;
and taking the target sub-control area as an acting position of the sliding operation in the touch control area.
In an exemplary embodiment of the present application, the deflection coefficient determining unit determines a deflection coefficient by which the virtual lens is to be deflected according to the acting position, including:
determining deflection coefficients corresponding to the target sub-control areas, wherein different sub-control areas correspond to different deflection coefficients;
and taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient to be deflected of the virtual lens.
In an exemplary embodiment of the present application, the picture deflection control unit controls the virtual lens to deflect according to the deflection coefficient and the action range, including:
and controlling the virtual lens to deflect according to the deflection coefficient and the action range in response to the end of the sliding operation.
In an exemplary embodiment of the present application, the deflection coefficient is in a direct proportional relationship with the magnitude of the virtual lens deflection.
In an exemplary embodiment of the application, the area dividing unit is further configured to divide the touch area into a first sub-control area, a second sub-control area and a third sub-control area, where a first deflection coefficient corresponding to the first sub-control area is greater than a second deflection coefficient corresponding to the second sub-control area, and the second deflection coefficient is greater than a third deflection coefficient corresponding to the third sub-control area; the second sub-control area is located in the middle of the first sub-control area and the third sub-control area.
According to an aspect of the application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
According to an aspect of the present application, there is provided an electronic device including: a processor; and a memory for storing executable instructions for the processor; wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
The exemplary embodiments of the present application may have some or all of the following advantages:
in the screen control method provided in an example embodiment of the present application, in response to a sliding operation for a touch area in a controller, a deflection coefficient may be determined based on an action position corresponding to the sliding operation, and a screen may be controlled to deflect and a virtual scene screen captured by a virtual lens after deflection may be displayed according to the deflection coefficient and an action range, so as to meet a diversified requirement of a user when the screen is rotated.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 schematically illustrates a block diagram of a computer system provided according to an embodiment of the present application;
FIG. 2 schematically illustrates a flow chart of a picture control method according to an embodiment of the present application;
fig. 3 schematically shows an application scenario diagram a according to an embodiment of the application;
FIG. 4 schematically shows an application scenario diagram b according to an embodiment of the present application;
fig. 5 schematically shows a flow chart of a picture control method according to another embodiment of the present application;
fig. 6 schematically shows a block diagram of a picture control apparatus according to an embodiment of the present application;
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present application.
Furthermore, the drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
First, terms referred to in the embodiments of the present application are briefly described:
a virtual environment is a virtual environment displayed (or provided) when an application program runs on a terminal. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, and the embodiment of the present application is not limited. Optionally, the virtual environment may provide a battle environment for the virtual object. Illustratively, in a large-scale escape type game, at least one virtual object performs a single-play battle in a virtual environment, the virtual object achieves the purpose of surviving in the virtual environment by avoiding attacks initiated by enemy units and dangers (such as poison circle, marshland and the like) existing in the virtual environment, when the life value of the virtual object in the virtual environment is zero, the life of the virtual object in the virtual environment is ended, and finally the virtual object smoothly passing through a route in the checkpoint is the winner. Each client may control one or more virtual objects in the virtual environment.
Wherein a virtual object refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters and animals displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on animated skeletal techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment.
Referring to fig. 1, fig. 1 schematically illustrates a block diagram of a computer system provided according to an embodiment of the present application. As shown in fig. 1, the computer system 100 includes: a mobile terminal 120 and a server 110.
The mobile terminal 120 has installed and running thereon an application program supporting a virtual environment. The application program may be any one of a three-dimensional map program, a military simulation program, a side-scrolling shooting game, a side-scrolling adventure game, a side-scrolling platformer, a side-scrolling strategy game, a Virtual Reality (VR) application, and an Augmented Reality (AR) application. The mobile terminal 120 is a mobile terminal used by a user, and the user uses the mobile terminal 120 to control a master virtual object located in a three-dimensional virtual environment to perform activities, including but not limited to: adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using a throwing prop, and attacking other virtual objects. Illustratively, the master virtual object is a virtual character, such as a simulated character object or an animated character object. Illustratively, the user controls the activity of the master virtual character through a UI control on the virtual environment screen.
The mobile terminal 120 is connected to the server 110 through a wireless network or a wired network.
The server 110 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 110 includes a processor 112 and a memory 111, the memory 111 further includes a receiving module 1113, a control module 1111 and a sending module 1112, the receiving module 1113 is configured to receive a request sent by an application program, such as an attack enemy; the control module 1111 is configured to control rendering of a virtual environment picture; the sending module 1112 is configured to send a response to the application, such as sending an attack-inflicted damage value to the application. The server 110 is used to provide background services for applications that support a three-dimensional virtual environment. Alternatively, the server 110 undertakes primary computational tasks and the mobile terminal 120 undertakes secondary computational tasks; alternatively, the server 110 undertakes the secondary computing job and the mobile terminal 120 undertakes the primary computing job.
Optionally, the application program runs on different operating system platforms (Android or iOS). Optionally, the device types of the mobile terminals 120 running the application are the same or different, and the device types include: at least one of a smartphone, a smart watch, a smart television, a vehicle-mounted mobile terminal, a wearable device, a tablet, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments are illustrated with the mobile terminal being a smartphone.
Those skilled in the art will appreciate that the number of mobile terminals described above may be greater or fewer. For example, the number of the mobile terminals may be only one, or may be tens or hundreds, or more. The number and the device type of the mobile terminals are not limited in the embodiment of the application.
The picture control method in one embodiment of the present disclosure may be executed in a local terminal device or a server. When the screen control method is operated on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the picture control method are completed on a cloud game server, while the client device is used to receive and send data and to present the game picture. For example, the client device may be a display device with data transmission functions close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, whereas the cloud game server in the cloud performs the information processing. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an optional implementation manner, taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, namely, a game program is downloaded and installed and operated through an electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation manner, an embodiment of the present invention provides a screen control method, where a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device, and may also be the aforementioned client device in a cloud interaction system.
Referring to fig. 2, fig. 2 schematically shows a flowchart of a picture control method according to an embodiment of the present application. This method is illustrated as applied to the mobile terminal 120 (or an application on the mobile terminal 120) shown in fig. 1. As shown in fig. 2, the screen control method may provide a graphical user interface through the terminal device, where the graphical user interface includes a screen of a virtual scene captured through a virtual lens, and the virtual scene includes a virtual object controlled by the terminal device, and specifically includes: step S210 to step S230.
Step S210: and responding to the sliding operation of the touch area in the controller, and determining the action position and the action range of the sliding operation in the touch area, wherein the controller is in communication connection with the terminal equipment.
Step S220: and determining a deflection coefficient to be deflected of the virtual lens according to the action position.
Step S230: and controlling the virtual lens to deflect according to the deflection coefficient and the action range, and displaying the picture of the virtual scene captured by the deflected virtual lens through a graphical user interface.
By implementing the method shown in fig. 2, in response to a sliding operation on a touch area in the controller, a deflection coefficient may be determined based on the action position of the sliding operation, and the picture may be controlled to deflect according to the deflection coefficient and the action range while the virtual scene picture captured by the deflected virtual lens is displayed, so as to meet the user's diversified needs when the picture is rotated.
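As an illustration only, the following Python sketch walks through steps S210 to S230 under assumptions that are not fixed by the application: the touch pad is modeled as three vertical strips, the coefficient values are arbitrary, and the lens deflection is taken to be the coefficient multiplied by the slide displacement, mapped linearly to yaw/pitch.

    from dataclasses import dataclass

    @dataclass
    class TouchArea:
        name: str
        x_min: float          # left edge of the strip on the touch pad (normalised)
        x_max: float          # right edge of the strip
        coefficient: float    # deflection coefficient assigned to this area

    def handle_slide(areas, start, end):
        """Sketch of steps S210-S230; start/end are (x, y) points on the touch pad."""
        # S210: action position = the area containing the initial touch point;
        #       action range   = the displacement of the sliding operation.
        area = next(a for a in areas if a.x_min <= start[0] < a.x_max)
        dx, dy = end[0] - start[0], end[1] - start[1]
        # S220: the deflection coefficient is the one bound to the action position.
        k = area.coefficient
        # S230: deflect the virtual lens in proportion to coefficient and range;
        #       the linear yaw/pitch mapping is an assumption for illustration.
        return k * dx, k * dy

    areas = [TouchArea("A", 0.0, 1 / 3, 1.3),
             TouchArea("B", 1 / 3, 2 / 3, 1.0),
             TouchArea("C", 2 / 3, 1.0, 0.5)]
    print(handle_slide(areas, (0.1, 0.5), (0.4, 0.5)))   # slide starting in area A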
The above steps of the present exemplary embodiment will be described in more detail below.
In step S210, in response to a sliding operation on a touch area in a controller, an action position and an action range of the sliding operation in the touch area are determined, wherein the controller is in communication connection with a terminal device.
Specifically, the controller includes any number of touch areas, and the embodiment of the present application is not limited. In addition, the start touch point corresponding to the sliding operation may be located in the touch area, and the end touch point corresponding to the sliding operation may be located in the touch area or outside the touch area.
Optionally, if the sliding operation slides from the touch area a to the touch area B, the sliding displacement of the sliding operation in the touch area a may be determined, the sliding displacement of the sliding operation in the touch area B may be determined, the current frame is controlled to deflect according to the deflection coefficient, the sliding direction, and the sliding distance of the touch area a, so as to obtain a reference frame, and the current frame is controlled to deflect according to the deflection coefficient, the sliding direction, and the sliding distance of the touch area B, so as to obtain a target frame.
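A possible sketch of this cross-area composition, reusing the TouchArea layout from the sketch above; splitting the displacement at the area boundary and summing the two scaled parts is an assumed reading of how the reference picture and target picture combine.

    def deflection_across_areas(areas, start_x, end_x):
        """Sketch: a slide that starts in area A and ends in area B contributes two
        partial deflections, each scaled by its own area's coefficient."""
        area_a = next(a for a in areas if a.x_min <= start_x < a.x_max)
        area_b = next(a for a in areas if a.x_min <= end_x < a.x_max)
        if area_a is area_b:
            return area_a.coefficient * (end_x - start_x)
        boundary = area_a.x_max if end_x > start_x else area_a.x_min
        part_a = boundary - start_x      # displacement accumulated inside area A -> reference picture
        part_b = end_x - boundary        # displacement accumulated inside area B -> target picture
        return area_a.coefficient * part_a + area_b.coefficient * part_b

    print(deflection_across_areas(areas, 0.2, 0.5))   # slide from area A into area B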
In addition, the action position corresponding to the sliding operation is related to the initial touch point and/or the termination touch point, and the action range corresponding to the sliding operation can be defined and explained through the touch area.
In addition, the number of touch areas through which the sliding operation passes is not limited, and no matter how many areas through which the sliding operation passes, the screen control can be performed based on the above mode, so that the interaction diversity can be improved, and richer game experience can be provided for users.
As an alternative embodiment, the virtual scene includes a virtual object controlled by the terminal device, the virtual object holding a virtual weapon; before the action position and the action range of the sliding operation in the touch area are determined in response to the sliding operation for the touch area in the controller, the method further includes: determining that the virtual weapon enters the aiming state. In this way, the aiming of the virtual weapon can be controlled precisely in the aiming state; that is, effects such as fast crosshair movement and slow crosshair movement can be achieved, which helps improve the shooting accuracy in the game.
Here, determining that the virtual weapon enters the aiming state may be understood as displaying a crosshair (sight) on the display screen, so that the user can control the position of the crosshair by triggering the touch area.
In addition, the controller may be a gamepad or a touch screen, and the touch area may be a physical touch pad or a virtual touch pad, which facilitates application in various scenarios: the technical solution of the present application can be realized whether the user operates on the terminal device or on the gamepad, so this embodiment expands the applicable range of the technical solution of the present application. In addition, the method may further include: in response to a sliding operation on an external independent touchpad (e.g., a touchpad separately connected to an electronic device), determining the touch area corresponding to the sliding operation.
Taking the physical touch pad of a gamepad as an example, please refer to fig. 3, which schematically illustrates an application scenario diagram a according to an embodiment of the present application. As shown in fig. 3, the present application can be applied to a gamepad, and the physical touch pad area of the gamepad can be divided into: area A 310, area B 320, and area C 330.
The area A 310, the area B 320, and the area C 330 correspond to different deflection coefficients, and the final distance by which the picture needs to be deflected can be obtained by multiplying the sliding distance corresponding to the sliding operation by the respective deflection coefficient. The user can trigger the corresponding area according to his or her own needs, thereby obtaining different deflection effects.
It should be noted that the area A 310, the area B 320, and the area C 330 are only exemplary; the present application does not limit the manner in which the areas are divided. The touch pad may be divided into any number of areas, and the division may be horizontal, vertical (as shown in fig. 3), oblique, and so on.
In addition, the number of touch areas obtained by the division may depend on the size of the total touchable area, and the method may further include: dividing the total touchable area into a plurality of touch areas based on a preset division unit (e.g., 5 cm × 5 cm), where the plurality of touch areas correspond to different deflection coefficients. In this way, the user can conveniently trigger a specific touch area according to his or her own needs, achieving a personalized picture deflection effect.
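A small illustration of such a unit-based division; the 5 cm unit comes from the example above, while the column-wise coefficient progression and the return format are assumptions.

    def divide_touchpad(width_cm, height_cm, unit_cm=5.0, base=0.5, step=0.25):
        """Sketch: split the total touchable surface into unit_cm x unit_cm cells,
        each with its own deflection coefficient."""
        cols, rows = int(width_cm // unit_cm), int(height_cm // unit_cm)
        return {(r, c): base + step * c for r in range(rows) for c in range(cols)}

    print(divide_touchpad(15, 10))   # a 15 cm x 10 cm pad -> 3 x 2 cells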
As an alternative embodiment, the method further comprises: and dividing the touch area into a plurality of sub-control areas in response to the virtual weapon entering the aiming state. Thus, the controllable precision of each touch area can be improved.
Specifically, each touch area may further include a plurality of sub-control areas, different sub-control areas may also correspond to different deflection coefficients, and the deflection coefficients of the sub-control areas in a touch area may be limited to the coefficient range of the corresponding touch area. A sub-control area is simply the name given to each of the small blocks into which a touch area is divided.
As an alternative embodiment, the method further comprises: dividing the touch area into a first sub-control area, a second sub-control area and a third sub-control area, wherein a first deflection coefficient corresponding to the first sub-control area is larger than a second deflection coefficient corresponding to the second sub-control area, and the second deflection coefficient is larger than a third deflection coefficient corresponding to the third sub-control area; the second sub-control area is located between the first sub-control area and the third sub-control area. In this way, the deflection coefficients of the first, second and third sub-control areas are constrained and an ordered relationship of the deflection coefficients among the three sub-control areas is established, which facilitates more accurate picture deflection control.
Specifically, the embodiments of the present application do not limit the shapes of the regions corresponding to the first, second, and third sub-control areas, nor their names, nor the particular ordering of their deflection coefficients.
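A brief sketch of this sub-division, reusing the TouchArea class from the earlier sketch; the equal widths and concrete coefficient values are assumptions made for illustration.

    def divide_into_sub_areas(area, coefficients=(1.5, 1.0, 0.5)):
        """Sketch: called when the virtual weapon enters the aiming state, splits one
        TouchArea into equal-width first/second/third sub-control areas whose
        coefficients decrease from first to third."""
        width = (area.x_max - area.x_min) / len(coefficients)
        subs = [TouchArea(f"{area.name}-sub{i + 1}",
                          area.x_min + i * width,
                          area.x_min + (i + 1) * width,
                          k)
                for i, k in enumerate(coefficients)]
        # first > second > third, and the second sub-control area lies between the other two
        assert subs[0].coefficient > subs[1].coefficient > subs[2].coefficient
        return subs

    for sub in divide_into_sub_areas(areas[0]):
        print(sub)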
As an alternative embodiment, determining the action position of the sliding operation in the touch area includes: determining a touch position of an initial touch point of the sliding operation in the touch area; and determining the touch position as an acting position of the sliding operation in the touch area. Therefore, the action position can be quickly detected, and the action position can be timely determined when the initial touch point is detected, so that the response speed of the deflection of the picture is improved.
Specifically, if a plurality of initial touch points are detected simultaneously, a start touch point may be selected based on the touch states (e.g., continuous touch, momentary touch, etc.) of the plurality of initial touch points, and the corresponding touch position is determined as the action position of the sliding operation in the touch area.
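A possible sketch of this selection step; the "longest continuous contact wins" policy and the data layout are assumptions, since the application only says the touch states are used.

    def pick_action_position(touch_points):
        """Sketch: when several initial touch points are detected at once, select the
        start point from their touch states."""
        # touch_points: [{"pos": (x, y), "state": "continuous" | "momentary", "ms": int}, ...]
        continuous = [p for p in touch_points if p["state"] == "continuous"]
        chosen = max(continuous or touch_points, key=lambda p: p["ms"])
        return chosen["pos"]   # becomes the action position of the sliding operation

    print(pick_action_position([{"pos": (0.2, 0.4), "state": "momentary", "ms": 30},
                                {"pos": (0.6, 0.5), "state": "continuous", "ms": 120}]))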
As an alternative embodiment, determining the action position of the sliding operation in the touch area includes: determining a target sub-control area where an initial touch point of the sliding operation is located; and taking the target sub-control area as an acting position of the sliding operation in the touch control area. Thus, the control precision of the picture deflection can be improved by the provided sub-control area control scheme.
Specifically, the target sub-control area may be any sub-control area in the touch area where the starting touch point is located, and the embodiment of the present application is not limited.
In step S220, a deflection coefficient to be deflected of the virtual lens is determined according to the acting position.
Specifically, the deflection coefficient may be preset, or may be selected based on a current scene, that is, in different scenes, the deflection coefficient corresponding to each touch area may be different.
For example, under a first-person viewing angle, the deflection coefficients corresponding to the touch areas may be 0.2, 0.5, and 0.5, respectively; under a third-person viewing angle, the deflection coefficients corresponding to the touch areas may be 1, 1.2, and 1.3, respectively; and under an aiming-down-sights (scoped) viewing angle, the deflection coefficients corresponding to the touch areas may be 0.2, 0.5, and 0.5, respectively. The deflection coefficient by which the virtual lens is to be deflected can be understood as the deflection coefficient of the touch area corresponding to the action position.
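The example coefficients above could be organised per scene roughly as follows; the dictionary layout and key names are assumptions.

    # Sketch: per-scene coefficient tables for the three touch areas.
    COEFFICIENTS_BY_VIEW = {
        "first_person":    {"A": 0.2, "B": 0.5, "C": 0.5},
        "third_person":    {"A": 1.0, "B": 1.2, "C": 1.3},
        "aim_down_sights": {"A": 0.2, "B": 0.5, "C": 0.5},
    }

    def coefficient_for(view_mode, area_name):
        # The coefficient to be applied is the one bound, under the current scene,
        # to the touch area in which the action position lies.
        return COEFFICIENTS_BY_VIEW[view_mode][area_name]

    print(coefficient_for("third_person", "B"))   # -> 1.2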
As an alternative embodiment, determining the deflection coefficient by which the virtual lens is to be deflected according to the action position includes: determining the deflection coefficient corresponding to the target sub-control area, wherein different sub-control areas correspond to different deflection coefficients; and taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient by which the virtual lens is to be deflected. In this way, the deflection coefficient of the touch area can be customized by the user according to his or her own needs, which improves interactivity with the user and provides a richer interactive experience.
In addition, optionally, a plurality of controls corresponding to different deflection coefficients can be displayed in response to the sliding operation; in response to a control selection operation, a target control is determined from the plurality of controls; a target deflection coefficient corresponding to the target control is determined; and the deflection coefficient of the touch area is set as the target deflection coefficient. In this way, the deflection coefficient of the touch area can be customized by the user according to his or her own needs, which improves interactivity with the user and provides a richer interactive experience.
Wherein, the setting of the deflection coefficient of the touch area as the target deflection coefficient includes: setting a deflection coefficient of a specific touch area in the plurality of touch areas as a target deflection coefficient; or, the deflection coefficients of the multiple touch areas are all set as target deflection coefficients, which is not limited in the embodiments of the present application.
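A minimal sketch of this optional customisation flow, reusing TouchArea from the first sketch; both behaviours mentioned above (one specific area, or all areas) are covered, and the function name and signature are assumptions.

    def apply_coefficient_choice(areas, target_coefficient, specific_area=None):
        """Sketch: after the user picks a control, its target deflection coefficient
        is written either to one specific touch area or to all touch areas."""
        for area in areas:
            if specific_area is None or area.name == specific_area:
                area.coefficient = target_coefficient

    apply_coefficient_choice(areas, 1.5, specific_area="A")   # one specific touch area
    apply_coefficient_choice(areas, 0.8)                      # all touch areas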
In step S230, the virtual lens is controlled to deflect according to the deflection coefficient and the action range, and a picture of a virtual scene captured by the deflected virtual lens is displayed through the graphical user interface.
As an alternative embodiment, the controlling the virtual lens to deflect according to the deflection coefficient and the action range includes: and controlling the virtual lens to deflect according to the deflection coefficient and the action range in response to the end of the sliding operation. Therefore, an accurate deflection result can be determined based on the deflection coefficient and the action range, the virtual lens deflection is carried out based on the accurate deflection result, and accurate deflection of the picture can be realized.
Wherein the deflection coefficient is in direct proportion to the amplitude of the virtual lens deflection.
Specifically, when the virtual shooting center (crosshair) in the current picture is controlled to slide by the target sliding distance in the sliding direction, the picture changes accordingly. When the current picture does not include the virtual shooting center, the center point of the current picture may be controlled to deflect by the target sliding distance in the sliding direction, thereby realizing the deflection of the picture, that is, displaying the picture of the virtual scene captured by the deflected virtual lens.
Referring to fig. 4, fig. 4 schematically shows an application scenario diagram b according to an embodiment of the present application. As shown in fig. 4, if the game picture includes the virtual shooting center 400, the virtual shooting center 400 can be deflected based on the sliding operation triggered by the user in fig. 4 and the sliding distance corresponding to that operation. Fig. 4 exemplarily shows the deflection displacements 410, 420, and 430 corresponding to the area A 310, the area B 320, and the area C 330 of fig. 3, respectively.
Specifically, if the user triggers a sliding operation in the area A 310, the deflection displacement 410 may be determined according to the sliding distance L and the deflection coefficient of the area A 310; if the user triggers the sliding operation in the area B 320, the deflection displacement 420 may be determined according to the sliding distance L and the deflection coefficient of the area B 320; and if the user triggers a sliding operation in the area C 330, the deflection displacement 430 may be determined according to the sliding distance L and the deflection coefficient of the area C 330.
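The fig. 4 computation reduces to a single multiplication applied when the slide ends; the concrete distance and coefficient values below are illustrative assumptions.

    def displacement_on_release(slide_distance, area_coefficient):
        """Sketch of fig. 4: the deflection displacement is the sliding distance L
        multiplied by the coefficient of the area where the slide started."""
        return slide_distance * area_coefficient

    L = 3.0   # assumed sliding distance
    for name, k in (("A", 1.3), ("B", 1.0), ("C", 0.5)):
        print(f"area {name}: deflection displacement = {displacement_on_release(L, k):.2f}")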
Further, referring to fig. 5, fig. 5 schematically shows a flowchart of a picture control method according to another embodiment of the present application. As shown in fig. 5, the screen control method may include: step S510 to step S560.
Step S510: determining that the virtual weapon enters the aiming state.
Step S520: and dividing the touch area into a plurality of sub-control areas in response to the virtual weapon entering the aiming state.
Step S530: responding to the sliding operation aiming at the sub-control area in the controller, determining a target sub-control area where an initial touch point of the sliding operation is located, and taking the target sub-control area as an action position of the sliding operation in the touch area; the controller is in communication connection with the terminal equipment.
Step S540: and determining deflection coefficients corresponding to the target sub-control areas, wherein different sub-control areas correspond to different deflection coefficients.
Step S550: and taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient to be deflected of the virtual lens.
Step S560: responding to the end of the sliding operation, controlling the virtual lens to deflect according to the deflection coefficient and the action range, and displaying the picture of a virtual scene captured by the deflected virtual lens through a graphical user interface; wherein the deflection coefficient is in direct proportional relation with the amplitude of the virtual lens deflection.
It should be noted that steps S510 to S560 correspond to the steps and the embodiment shown in fig. 2, and for the specific implementation of steps S510 to S560, please refer to the steps and the embodiment shown in fig. 2, which will not be described again.
It can be seen that, by implementing the method shown in fig. 5, in response to a sliding operation for a touch area in a controller, a deflection coefficient may be determined based on an action position corresponding to the sliding operation, and a screen may be controlled to deflect and a virtual scene screen captured by a virtual lens after deflection may be displayed according to the deflection coefficient and an action range, so as to meet diversified requirements of a user when the screen is rotated, for example, when the screen is required to deflect slowly, a touch area with a lower deflection coefficient may be triggered, and when the screen is required to deflect quickly, a touch area with a higher deflection coefficient may be triggered.
Further, referring to fig. 6, fig. 6 schematically illustrates a block diagram of a picture control apparatus according to an embodiment of the present application, through which a graphical user interface including pictures of a virtual scene captured through a virtual lens, the virtual scene including a virtual object controlled through the apparatus, may be provided. As shown in fig. 6, the screen control device 600 may specifically include:
a touch area determination unit 601, configured to determine an action position and an action range of a sliding operation in a touch area in response to the sliding operation for the touch area in a controller, where the controller is in communication connection with a terminal device;
a deflection coefficient determining unit 602, configured to determine a deflection coefficient to be deflected of the virtual lens according to the action position;
and a picture deflection control unit 603, configured to control the virtual lens to deflect according to the deflection coefficient and the action range, and display, through a graphical user interface, a picture of a virtual scene captured by the deflected virtual lens.
It can be seen that, with the implementation of the apparatus shown in fig. 6, in response to a sliding operation on a touch area in a controller, a deflection coefficient may be determined based on an action position corresponding to the sliding operation, and a screen may be controlled to deflect and display a virtual scene screen captured by a virtual lens after deflection according to the deflection coefficient and an action range, so as to meet diversified requirements of a user when the screen is rotated, for example, when the screen is required to deflect slowly, a touch area with a lower deflection coefficient may be triggered, and when the screen is required to deflect quickly, a touch area with a higher deflection coefficient may be triggered.
In an exemplary embodiment of the present application, a virtual scene includes a virtual object controlled by a terminal device, the virtual object holding a virtual weapon;
before the touch area determination unit 601 determines, in response to the sliding operation for the touch area in the controller, the action position and the action range of the sliding operation in the touch area, the apparatus further includes:
and the state determination unit is used for determining that the virtual weapon enters the aiming state.
Therefore, implementing this optional embodiment enables precise control over the aiming of the virtual weapon in the aiming state; that is, effects such as fast crosshair movement and slow crosshair movement can be achieved, which helps improve the shooting accuracy in the game.
In an exemplary embodiment of the present application, the apparatus further comprises:
and the area dividing unit is used for responding to the virtual weapon entering the aiming state and dividing the touch area into a plurality of sub-control areas.
Therefore, the controllable precision of each touch area can be improved by implementing the optional embodiment.
In an exemplary embodiment of the present application, the determining unit 601 for determining an action position of a sliding operation in a touch area includes:
determining a touch position of an initial touch point of the sliding operation in the touch area;
and determining the touch position as an action position of the sliding operation in the touch area.
Therefore, by implementing the optional embodiment, the action position can be quickly detected, and the action position can be timely determined when the initial touch point is detected, so that the response speed of the deflection of the picture is improved.
In an exemplary embodiment of the present application, the determining unit 601 for determining an action position of a sliding operation in a touch area includes:
determining a target sub-control area where an initial touch point of the sliding operation is located;
and taking the target sub-control area as an action position of the sliding operation in the touch control area.
It can be seen that implementing this alternative embodiment, the control accuracy for the picture deflection can be improved by the sub-control region control scheme provided.
In an exemplary embodiment of the present application, the determining unit 602 for determining a deflection coefficient to be deflected by the virtual lens according to the acting position includes:
determining deflection coefficients corresponding to the target sub-control areas, wherein different sub-control areas correspond to different deflection coefficients;
and taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient to be deflected of the virtual lens.
Therefore, by implementing this optional embodiment, the user can customize the deflection coefficient of the touch area according to his or her own needs, which improves interactivity with the user and provides a richer interactive experience.
In an exemplary embodiment of the present application, the picture deflection control unit 603 controls the virtual lens to deflect according to the deflection coefficient and the action range, including:
and controlling the virtual lens to deflect according to the deflection coefficient and the action range in response to the end of the sliding operation.
Wherein the deflection coefficient is in direct proportional relation with the amplitude of the virtual lens deflection.
Therefore, by implementing the optional embodiment, an accurate deflection result can be determined based on the deflection coefficient and the action range, and the virtual lens deflection is performed based on the accurate deflection result, so that the accurate deflection of the picture can be realized.
In an exemplary embodiment of the present application, the zone dividing unit is further configured to divide the touch area into a first sub-control area, a second sub-control area, and a third sub-control area, where a first deflection coefficient corresponding to the first sub-control area is greater than a second deflection coefficient corresponding to the second sub-control area, and the second deflection coefficient is greater than a third deflection coefficient corresponding to the third sub-control area; the second sub-control area is located in the middle of the first sub-control area and the third sub-control area.
It can be seen that, by implementing this alternative embodiment, defining the deflection coefficients for the first, second, and third sub-control areas establishes an ordered relationship of the deflection coefficients among the three sub-control areas, thereby facilitating more accurate picture deflection control.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Since each functional module of the screen control apparatus of the exemplary embodiment of the present application corresponds to a step of the exemplary embodiment of the screen control method described above, for details not disclosed in the apparatus embodiments of the present application, please refer to the embodiments of the screen control method described above.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a computer system suitable for implementing an electronic device according to an embodiment of the present application.
It should be noted that the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the application scope of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the Central Processing Unit (CPU) 701, performs various functions defined in the methods and apparatus of the present application.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in certain cases, constitute a limitation on the units themselves.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A picture control method for providing a graphical user interface through a terminal device, the graphical user interface including a picture of a virtual scene captured through a virtual lens, the virtual scene including a virtual object controlled through the terminal device, the method comprising:
in response to a sliding operation on a touch area of a controller, determining an action position and an action range of the sliding operation in the touch area, wherein the controller is communicatively connected to the terminal device;
determining, according to the action position, a deflection coefficient by which the virtual lens is to be deflected; and
controlling the virtual lens to deflect according to the deflection coefficient and the action range, and displaying, through the graphical user interface, a picture of the virtual scene captured by the deflected virtual lens.
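Purely as an editorial illustration, and not as part of the claims, the following Python sketch shows one way the three steps of claim 1 could fit together. Every name here (VirtualLens, handle_slide, the coefficient table) and the linear formula of coefficient times action range are assumptions made for the example rather than details taken from the application.

from dataclasses import dataclass

@dataclass
class VirtualLens:
    yaw: float = 0.0  # horizontal angle of the virtual lens, in degrees

    def deflect(self, angle: float) -> None:
        self.yaw += angle

# Hypothetical mapping from an action position (a sub-area of the touch area)
# to a deflection coefficient; the values are illustrative only.
DEFLECTION_COEFFICIENTS = {"left": 2.0, "middle": 1.0, "right": 0.5}

def handle_slide(lens: VirtualLens, action_position: str, action_range: float) -> None:
    # Step 1 has already happened: the slide gave us an action position and an action range.
    coefficient = DEFLECTION_COEFFICIENTS[action_position]  # step 2: coefficient from position
    lens.deflect(coefficient * action_range)                # step 3: deflect by coefficient x range
    # A real implementation would now redraw the virtual scene from the deflected lens.

lens = VirtualLens()
handle_slide(lens, "left", action_range=0.4)
print(lens.yaw)  # 0.8 under the assumed coefficients

The linear relationship in the last step also reflects the proportionality recited in claim 8, under which a larger deflection coefficient yields a larger deflection of the virtual lens.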
2. The method of claim 1, wherein the virtual scene comprises a virtual object controlled by the terminal device, and the virtual object holds a virtual weapon;
before the responding to a sliding operation on a touch area of a controller and determining an action position and an action range of the sliding operation in the touch area, the method further comprises:
determining that the virtual weapon enters an aiming state.
3. The method of claim 2, further comprising:
in response to the virtual weapon entering the aiming state, dividing the touch area into a plurality of sub-control areas.
4. The method of claim 1, wherein the determining the action position of the sliding operation in the touch area comprises:
determining a touch position of an initial touch point of the sliding operation in the touch area;
determining the touch position as an action position of the sliding operation in the touch area.
5. The method of claim 3, wherein the determining the action position of the sliding operation in the touch area comprises:
determining a target sub-control area where the initial touch point of the sliding operation is located;
and taking the target sub-control area as the action position of the sliding operation in the touch area.
6. The method according to claim 5, wherein determining a deflection coefficient by which the virtual lens is to be deflected according to the action position comprises:
determining a deflection coefficient corresponding to the target sub-control area, wherein different sub-control areas correspond to different deflection coefficients; and
taking the deflection coefficient corresponding to the target sub-control area as the deflection coefficient by which the virtual lens is to be deflected.
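A minimal sketch, assuming the touch area is split into equal-width strips, of how the target sub-control area of claims 5 and 6 might be resolved from the initial touch point; the strip count, widths, and coefficient values are illustrative assumptions and not taken from the application.

def target_sub_area(touch_x: float, area_width: float, sub_area_count: int) -> int:
    # Index of the sub-control area containing the initial touch point,
    # assuming sub_area_count equal-width strips across the touch area.
    index = int(touch_x / area_width * sub_area_count)
    return min(index, sub_area_count - 1)  # clamp a touch on the far edge into the last strip

# Different sub-control areas correspond to different (hypothetical) deflection coefficients.
coefficients = [2.0, 1.0, 0.5]
area = target_sub_area(touch_x=310.0, area_width=600.0, sub_area_count=3)
print(area, coefficients[area])  # 1 1.0, i.e. the middle strip and its coefficient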
7. The method according to claim 1, wherein the controlling the virtual lens to deflect according to the deflection coefficient and the action range comprises:
in response to the end of the sliding operation, controlling the virtual lens to deflect according to the deflection coefficient and the action range.
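For claim 7, one plausible arrangement, sketched below under the same illustrative assumptions as the earlier examples, is to accumulate the action range while the finger moves and to compute the deflection only when the sliding operation ends.

class SlideSession:
    # Accumulates a sliding operation and computes the deflection only on release.

    def __init__(self, coefficient: float):
        self.coefficient = coefficient
        self.action_range = 0.0

    def on_move(self, delta: float) -> None:
        self.action_range += delta  # track how far the slide has travelled; no deflection yet

    def on_release(self) -> float:
        # The deflection is computed once, in response to the end of the sliding operation.
        return self.coefficient * self.action_range

session = SlideSession(coefficient=1.5)
for step in (0.1, 0.2, 0.1):
    session.on_move(step)
print(session.on_release())  # approximately 0.6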
8. The method of claim 1, wherein the deflection coefficient is directly proportional to a magnitude of the virtual lens deflection.
9. The method of claim 1, further comprising:
dividing the touch area into a first sub-control area, a second sub-control area, and a third sub-control area, wherein a first deflection coefficient corresponding to the first sub-control area is greater than a second deflection coefficient corresponding to the second sub-control area, and the second deflection coefficient is greater than a third deflection coefficient corresponding to the third sub-control area; and the second sub-control area is located between the first sub-control area and the third sub-control area.
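The three-region layout of claim 9 could, for instance, look like the sketch below; the left-to-right ordering and the concrete coefficient values are assumptions chosen only so that the first coefficient exceeds the second and the second exceeds the third, with the second sub-control area sitting between the other two.

from dataclasses import dataclass

@dataclass
class SubControlArea:
    name: str
    x_min: float       # left edge within the touch area, normalized to [0, 1]
    x_max: float       # right edge within the touch area, normalized to [0, 1]
    coefficient: float

# First and third areas at the edges, second area between them; coefficients strictly decreasing.
TOUCH_AREA_LAYOUT = [
    SubControlArea("first",  0.00, 0.33, coefficient=2.0),
    SubControlArea("second", 0.33, 0.66, coefficient=1.0),
    SubControlArea("third",  0.66, 1.00, coefficient=0.5),
]

def coefficient_at(x: float) -> float:
    # Look up the deflection coefficient for a normalized touch position x in [0, 1].
    for area in TOUCH_AREA_LAYOUT:
        if area.x_min <= x < area.x_max or (x == 1.0 and area.x_max == 1.0):
            return area.coefficient
    raise ValueError("touch position outside the touch area")

print(coefficient_at(0.1), coefficient_at(0.5), coefficient_at(0.9))  # 2.0 1.0 0.5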
10. A picture control apparatus, wherein a graphical user interface is provided through the apparatus, the graphical user interface comprising a picture of a virtual scene captured through a virtual lens, and the virtual scene comprising a virtual object controlled through the apparatus, the apparatus comprising:
a touch area determination unit, configured to, in response to a sliding operation on a touch area of a controller, determine an action position and an action range of the sliding operation in the touch area, wherein the controller is communicatively connected to a terminal device;
a deflection coefficient determining unit, configured to determine, according to the action position, a deflection coefficient by which the virtual lens is to be deflected; and
a picture deflection control unit, configured to control the virtual lens to deflect according to the deflection coefficient and the action range, and to display, through the graphical user interface, a picture of the virtual scene captured by the deflected virtual lens.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211604727.1A CN115920368A (en) 2022-12-13 2022-12-13 Picture control method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211604727.1A CN115920368A (en) 2022-12-13 2022-12-13 Picture control method, device, medium and equipment

Publications (1)

Publication Number Publication Date
CN115920368A true CN115920368A (en) 2023-04-07

Family

ID=86700457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211604727.1A Pending CN115920368A (en) 2022-12-13 2022-12-13 Picture control method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN115920368A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination