CN112354181A - Open mirror picture display method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112354181A
CN112354181A (application number CN202011377797.9A)
Authority
CN
China
Prior art keywords
mirror
control
virtual scene
picture
virtual
Prior art date
Legal status
Granted
Application number
CN202011377797.9A
Other languages
Chinese (zh)
Other versions
CN112354181B (en)
Inventor
刘智洪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011377797.9A
Publication of CN112354181A
Application granted
Publication of CN112354181B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/837: Special adaptations for executing a specific game genre or game mode: shooting of targets
    • A63F 2300/8076: Features of games using an electronically generated display having two or more dimensions, specially adapted for executing a specific type of game: shooting

Abstract

The application relates to a method and device for displaying an open mirror picture, a computer device, and a storage medium, in the technical field of virtual scenes. The method comprises: displaying a virtual scene interface, where an open mirror control is displayed at a first position of the interface; in response to a first trigger operation in a first area of the virtual scene interface, moving the open mirror control from the first position to a second position corresponding to the first trigger operation; and, in response to a first specified operation at the second position, displaying the open mirror picture in the virtual scene interface. With this method, the user can interact directly with the open mirror control that has moved to the position of the user's operation in order to display the open mirror picture, so the operations the user needs to perform to open the mirror are simplified and the efficiency of human-computer interaction in the virtual scene interface is improved.

Description

Open mirror picture display method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method and an apparatus for displaying an open-mirror picture, a computer device, and a storage medium.
Background
At present, in game applications that provide virtual items, such as first-person shooting games, the sight of a real firearm can be simulated to display an aiming picture with a finer field of view, improving the user's firing accuracy when operating a virtual firearm.
In the related art, so that the user can choose whether to enter this aiming view and display a virtual picture with a smaller but finer field of view, an open mirror control is provided on the virtual interface of the game application, and the user can tap this control to make the game display the open mirror picture.
However, in the related art, the open mirror control is usually located at a fixed position, either preset by the game or customized by the user in the application, and the resulting human-computer interaction efficiency is low.
Disclosure of Invention
The embodiments of the application provide a method and device for displaying an open mirror picture, a computer device, and a storage medium, which can simplify the operations a user performs to display the open mirror picture and improve human-computer interaction efficiency. The technical scheme is as follows:
In one aspect, a method for displaying an open mirror picture is provided, the method comprising:
displaying a virtual scene interface, where an open mirror control is displayed at a first position of the virtual scene interface; the open mirror control is used to trigger display of the open mirror picture in the virtual scene interface, and the open mirror picture is the picture shown when aiming through a virtual sight mounted on a virtual item;
responding to a first trigger operation in a first area in the virtual scene interface, and moving the open mirror control from the first position to a second position corresponding to the first trigger operation;
and, in response to a first specified operation at the second position, displaying the open mirror picture in the virtual scene interface.
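The three claimed steps can be sketched as a minimal event handler. The class name, method names, and coordinate values below are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the claimed flow: the open mirror control
# starts at a fixed first position, follows the user's trigger
# operation to a second position, and a specified operation at that
# second position displays the open mirror picture.

class OpenMirrorController:
    def __init__(self, first_position):
        self.control_position = first_position  # the first position
        self.scope_shown = False

    def on_trigger(self, touch_position):
        # Step 2: move the control to the position of the trigger
        # operation (the "second position").
        self.control_position = touch_position

    def on_specified_operation(self, position):
        # Step 3: a specified operation at the control's current
        # position displays the open mirror picture.
        if position == self.control_position:
            self.scope_shown = True


ctrl = OpenMirrorController((900, 400))
ctrl.on_trigger((620, 310))              # control follows the touch
ctrl.on_specified_operation((620, 310))  # open mirror picture shown
```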
In another aspect, a device for displaying an open mirror picture is provided, the device being used in a terminal and comprising:
a virtual interface display module, used to display a virtual scene interface, where an open mirror control is displayed at a first position of the virtual scene interface; the open mirror control is used to trigger display of the open mirror picture in the virtual scene interface, and the open mirror picture is the picture shown when aiming through a virtual sight mounted on a virtual item;
a first open mirror control moving module, used to move the open mirror control from the first position to a second position corresponding to a first trigger operation, in response to the first trigger operation in a first area of the virtual scene interface;
and an open mirror picture display module, used to display the open mirror picture in the virtual scene interface in response to a first specified operation at the second position.
In a possible implementation manner, the open mirror picture display module is configured to,
and display the open mirror picture in the virtual scene interface in response to the first specified operation being a long press continuing from the first trigger operation, with the duration of the long press reaching a first time threshold.
In a possible implementation manner, the open mirror picture display module is configured to,
in response to the first specified operation being a click operation separated from the first trigger operation, the open mirror screen is displayed in the virtual scene interface.
In one possible implementation manner, the first open mirror control moving module is configured to,
and in response to the first trigger operation being a touch sliding operation and the second position being the current touch position of the touch sliding operation, moving the open mirror control from the first position to the second position.
In a possible implementation manner, the first open mirror control moving module is further configured to,
and in response to the first trigger operation being a touch sliding operation, the second position being a current touch position of the touch sliding operation, and a sliding distance of the touch sliding operation being greater than a sliding distance threshold, moving the open mirror control from the first position to the second position.
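The slide-distance condition in this implementation option can be sketched as follows; the threshold value and function name are illustrative assumptions.

```python
import math

# Hypothetical gate: the open mirror control only follows a touch
# sliding operation once the slide distance exceeds a threshold.
SLIDE_DISTANCE_THRESHOLD = 20.0  # illustrative value, in pixels

def should_follow(start, current, threshold=SLIDE_DISTANCE_THRESHOLD):
    """Return True if the touch slide is long enough to move the control."""
    distance = math.hypot(current[0] - start[0], current[1] - start[1])
    return distance > threshold
```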
In a possible implementation manner, the first open mirror control moving module is further configured to,
and adjusting a visual angle corresponding to a scene picture displayed in the virtual scene interface based on the touch sliding operation.
In one possible implementation manner, the first open mirror control moving module includes:
the coordinate information acquisition unit is used for responding to a first trigger operation in a first area in the virtual scene interface and acquiring coordinate information of a second position corresponding to the first trigger operation;
and the coordinate information updating unit is used for updating the coordinate information of the mirror opening control based on the coordinate information of the second position.
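A minimal sketch of these two units, assuming simple dictionary shapes for the trigger event and the control record (both shapes are hypothetical):

```python
# Acquisition unit: read the coordinates of the second position from
# the first trigger operation. Update unit: write those coordinates
# into the open mirror control's coordinate record.
def update_control_coordinates(control, trigger_event):
    """Copy the trigger operation's coordinates onto the open mirror control."""
    second_position = (trigger_event["x"], trigger_event["y"])  # acquisition unit
    control["x"], control["y"] = second_position                # update unit
    return control
```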
In one possible implementation, the apparatus further includes:
the second open mirror control moving module is used for responding to a second trigger operation in a first area in the virtual scene interface and moving the open mirror control from the second position to a third position corresponding to the second trigger operation;
and the first mirror-opening picture closing module is used for responding to a second specified operation of the third position and closing the mirror-opening picture in the virtual scene interface.
In one possible implementation, the apparatus further includes:
a second picture-in-mirror closing module, configured to close the picture in the virtual scene interface in response to a third specified operation on the second location;
wherein the third specified operation is a long press operation executed for the first trigger operation, and the duration of the long press operation reaches a second time threshold; the second time threshold is greater than the first time threshold.
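The interplay of the two time thresholds can be sketched like this; the concrete threshold values are hypothetical, only the ordering (second greater than first) comes from the text above.

```python
# Holding past a first threshold opens the mirror picture; continuing
# to hold past a larger second threshold closes it again.
FIRST_TIME_THRESHOLD = 0.5   # seconds, hypothetical
SECOND_TIME_THRESHOLD = 2.0  # seconds, must exceed the first threshold

def scope_state(press_duration):
    """Map the duration of a long press to the open mirror picture's state."""
    if press_duration >= SECOND_TIME_THRESHOLD:
        return "closed"   # third specified operation: close the picture
    if press_duration >= FIRST_TIME_THRESHOLD:
        return "open"     # first specified operation: show the picture
    return "idle"         # press too short to trigger either
```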
In one possible implementation manner, the first open mirror control moving module includes:
a mobile configuration obtaining unit, configured to obtain control mobile configuration information in response to a first trigger operation in a first region in the virtual scene interface, where the control mobile configuration information is used to indicate whether to allow movement of a position of the open mirror control;
and an open mirror control moving unit, used to move the open mirror control from the first position to a second position corresponding to the first trigger operation, in response to the control movement configuration information indicating that the position of the open mirror control is allowed to move.
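The configuration gate described by these two units might look like the following sketch; the configuration key name is an assumption.

```python
# The control only follows the trigger operation when the control
# movement configuration information allows its position to move.
def handle_trigger(control_pos, touch_pos, config):
    """Return the control's new position, honoring the movement configuration."""
    if config.get("allow_move", False):
        return touch_pos   # move to the second position
    return control_pos     # stay at the first position
```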
In one possible implementation manner, the first open mirror control moving module includes:
the management interface display unit is used for displaying the control management interface; the control management interface comprises configuration options of the open mirror control;
and the configuration information generating unit is used for responding to the configuration operation of the configuration options of the open mirror control and generating the control movement configuration information.
In one possible implementation, the first area is a right half area of the virtual scene interface.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the above-mentioned open mirror display method.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the above-mentioned open mirror picture display method.
In yet another aspect, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to enable the computer device to execute the open mirror picture showing method.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
the method comprises the steps that triggering operation of a user in a virtual scene interface is obtained, a mirror opening control is moved to a position corresponding to the triggering operation, and when the user executes first specified operation on the position where the mirror opening control is located, a mirror opening picture is directly displayed in the virtual scene interface. According to the scheme, the triggering operation of the user in the virtual scene interface is directly acquired, the mirror opening control is moved to the position corresponding to the triggering operation, the user can directly interact with the mirror opening control at the position to display a mirror opening picture at the moment, and the finger or pointer equipment does not need to be moved to the initial position of the mirror opening control to trigger the mirror opening function, so that the operation required by the user for realizing the mirror opening function is simplified, and the human-computer interaction efficiency in the virtual scene interface is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a display interface diagram of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for presenting open-mirror pictures according to an exemplary embodiment of the present application;
FIG. 4 illustrates a flowchart of a method for open-mirror display in accordance with an exemplary embodiment of the present application;
FIG. 5 is a schematic view of a virtual interface according to the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of the open mirror control following a touch operation, according to the embodiment shown in FIG. 4;
FIG. 7 is a schematic diagram of a control management interface according to the embodiment shown in FIG. 4;
FIG. 8 is a schematic view of an open mirror image according to the embodiment shown in FIG. 4;
FIG. 9 is a schematic diagram of a triggered position determination process according to the embodiment shown in FIG. 4;
FIG. 10 is a schematic diagram illustrating an open mirror determination flow according to an exemplary embodiment;
FIG. 11 is a structural block diagram of an open mirror picture display device according to an exemplary embodiment of the present application;
FIG. 12 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "And/or" describes the association relationship of the associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
A virtual scene is a scene that is displayed (or provided) when an application program runs on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene may also be used for virtual scene engagement between at least two virtual characters. Optionally, the virtual scene may also be used for a virtual firearm fight between at least two virtual characters. Optionally, the virtual scene may also be used for at least two virtual characters to fight with virtual firearms within a target area of the virtual scene that keeps shrinking over time.
A virtual scene is typically generated by an application in a computer device such as a terminal and rendered on hardware (e.g., a screen) of the terminal. The terminal can be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
2) Virtual object
A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional stereo model created based on an animated skeleton technique. Each virtual object has its own shape, volume and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.
3) Virtual prop
Virtual props are props that a virtual object can use in the virtual environment, including: virtual weapons such as pistols, rifles, sniper rifles, daggers, knives, swords, and axes that can injure other virtual objects; supply props such as bullets; virtual attachments such as quick magazines, sighting scopes, and silencers that are mounted on a specified virtual weapon to add certain attributes to it; and defensive props such as shields, armor, and armored vehicles.
4) First person shooting game
A first-person shooting game is a shooting game that the user plays from a first-person perspective; the picture of the virtual environment in the game is the picture observed from the perspective of a first virtual object. In the game, at least two virtual objects fight in a single-round battle mode in the virtual environment. A virtual object survives by avoiding injury initiated by other virtual objects and dangers present in the virtual environment (such as the poison circle, swampland, and the like); when a virtual object's life value in the virtual environment drops to zero, its life in the virtual environment ends, and the virtual objects that ultimately survive are the winners. Optionally, each client may control one or more virtual objects in the virtual environment, with the battle taking the moment the first client joins as its start time and the moment the last client exits as its end time. Optionally, the competitive modes of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode; the battle mode is not limited in the embodiments of the present application.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has installed and runs an application 111 supporting a virtual environment, and the application 111 may be a multiplayer online battle program. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any one of a military simulation program, a Multiplayer Online Battle Arena (MOBA) game, a battle-royale shooting game, and a simulation strategy game (SLG). In the present embodiment, the application 111 is an FPS (First-Person Shooter) game as an example. The first terminal 110 is a terminal used by the first user 112, and the first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment, where the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animation character.
The second terminal 130 has installed and runs an application 131 supporting a virtual environment, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any one of a military simulation program, a MOBA game, a battle-royale shooting game, and an SLG game; in this embodiment, the application 131 is an FPS game as an example. The second terminal 130 is a terminal used by the second user 132, and the second user 132 uses the second terminal 130 to control a second virtual object located in the virtual environment, where the second virtual object may be referred to as the master virtual character of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animation character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is only illustrated with the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals that may access the server 120 in different embodiments. Optionally, one or more terminals are terminals corresponding to the developer, a development and editing platform for supporting the application program in the virtual environment is installed on the terminal, the developer can edit and update the application program on the terminal and transmit the updated application program installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application program installation package from the server 120 to update the application program.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a combat services module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load an instruction stored in the server 120, and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of a user account used by the first terminal 110, the second terminal 130, and other terminals, such as a head portrait of the user account, a nickname of the user account, a fighting capacity index of the user account, and a service area where the user account is located; the fight service module 124 is used for providing a plurality of fight rooms for the users to fight, such as 1V1 fight, 3V3 fight, 5V5 fight and the like; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which shows a schematic view of a display interface of the virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual object 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user or a virtual object controlled by an application program corresponding to other terminals.
In fig. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture displayed in the scene picture 200 consists of objects observed from the perspective of the currently controlled virtual object 210. For example, as shown in fig. 2, the environment picture 220 observed from the perspective of the currently controlled virtual object 210 includes the ground 224, the sky 225, the horizon 223, the hill 221, and the factory building 222.
The currently controlled virtual object 210 can release skills or use virtual props, move, and execute specified actions under the control of the user, and virtual objects in the virtual scene can show different three-dimensional models under the user's control. For example, if the screen of the terminal supports touch operations and the scene picture 200 of the virtual scene includes a virtual control, then when the user touches the virtual control, the currently controlled virtual object 210 executes the specified action in the virtual scene and shows the corresponding three-dimensional model.
Fig. 3 shows a flowchart of an open mirror picture display method according to an exemplary embodiment of the present application. The method can be executed by a computer device, which may be a terminal or a server, or may comprise both a terminal and a server. As shown in fig. 3, the open mirror picture display method includes:
Step 301, displaying a virtual scene interface; an open mirror control is displayed at a first position of the virtual scene interface; the open mirror control is used to trigger display of an open mirror picture in the virtual scene interface; the open mirror picture is the picture shown when aiming through a virtual sight mounted on a virtual item.
In one possible implementation manner, the virtual scene interface includes a virtual scene picture, and the virtual scene picture is a picture of a virtual scene observed from a perspective of a virtual object.
In a possible implementation manner, the orientation of the virtual object in the virtual scene may be the direction the virtual object faces. Taking a three-dimensional virtual scene as an example, the computer device may determine the orientation by obtaining the projection of the virtual object's facing direction onto the horizontal plane of the three-dimensional coordinate system. For example, when the virtual object is standing, squatting, jumping, or floating in the virtual scene, its orientation is directly in front of where it faces; when the virtual object is lying prone, swimming, flying, or the like, its orientation is the horizontal projection of the direction from its feet to its head. When the virtual scene is a two-dimensional virtual scene, the orientation of the virtual object is simply the direction it faces. The embodiments of the present application take a three-dimensional virtual scene as an example.
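The horizontal-projection computation described above can be illustrated with a short sketch; a y-up coordinate system and the function name are assumptions for illustration.

```python
import math

# Project a 3D facing vector onto the horizontal plane to obtain the
# virtual object's orientation, as described above.
def horizontal_orientation(facing):
    """Project a 3D facing vector (x, y, z), with y up, onto the
    horizontal plane and return the normalized 2D direction, or None
    if the vector points straight up or down."""
    x, _, z = facing
    length = math.hypot(x, z)
    if length == 0:
        return None  # purely vertical facing has no horizontal projection
    return (x / length, z / length)
```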
In a possible implementation manner, the virtual scene interface further includes at least one of a firing control, a jumping control, a moving control, and a prone control.
In response to receiving a trigger operation on the firing control, the virtual object performs a firing operation toward the target position through the virtual prop (namely, a virtual firearm) in its hand. The jump control is used for triggering a jump operation of the virtual object; the movement control is used for controlling, according to the slide information corresponding to the movement control, the virtual object to move in the direction indicated by that slide information; the prone control is used for triggering a prone operation of the virtual object.
In a possible implementation manner, the prone control can also control the virtual object to perform different operations depending on how the user operates it. For example, when a single-click operation on the prone control is received, the virtual object is controlled to perform a squat operation; when a long-press operation on the prone control is received, the virtual object is controlled to perform the prone operation, where a long-press operation is an operation in which the user triggers the control for a duration greater than a trigger threshold.
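The click-versus-long-press dispatch described above can be sketched as follows (a minimal sketch; the function name, return values, and the 0.5-second trigger threshold are assumptions, not values given in the patent):

```python
# Assumed value for the trigger threshold distinguishing a click from a long press.
LONG_PRESS_THRESHOLD = 0.5  # seconds

def dispatch_prone_control(press_duration: float) -> str:
    """Map a press on the prone control to the operation it triggers:
    a press longer than the threshold goes prone, a shorter click squats."""
    if press_duration > LONG_PRESS_THRESHOLD:
        return "prone"   # long press: lie prone
    return "crouch"      # single click: squat
```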
Step 302, in response to a first trigger operation on a first area in the virtual scene interface, moving the open mirror control from the first position to a second position corresponding to the first trigger operation.
In a possible implementation manner, when the terminal is a mobile device, the first trigger operation may be a pressing operation of the first trigger position by a user.
In one possible implementation, when the terminal is a mobile terminal having a 3D-touch (three-dimensional touch) function, the pressing operation may be a light-press operation or a heavy-press operation.
In one possible implementation, when the terminal is a PC (Personal Computer), the pressing operation may be a clicking operation triggered by a user based on a pointing device such as a mouse or a touch pad.
Step 303, in response to the first designation operation on the second position, displaying the open mirror picture in the virtual scene interface.
In one possible implementation, the mirror-opening picture is displayed in a central area of the virtual scene interface in response to the first designated operation on the second position.
In one possible implementation manner, when the mirror-opening picture is displayed on the virtual scene interface in response to the trigger operation on the mirror-opening control, the mirror-opening picture is displayed in the central area of the virtual scene interface while the mirror-opening control is still displayed at the second position of the virtual scene interface. That is, when the mirror-opening control is triggered, the mirror-opening picture corresponding to the control and the control itself at the second position are displayed on the virtual scene interface at the same time.
In a possible implementation manner, at least one of a firing control, a jumping control, a moving control and a prone control is further included in the first position; and when responding to the triggering operation of the mirror-opening control, the mirror-opening control is displayed on the second position, and at least one of the firing control, the jumping control, the moving control and the prone control is displayed on the first position.
In another possible implementation manner, upon receiving the trigger operation of the user, the mirror-opening picture is displayed overlaid on the virtual scene interface; a second position on the mirror-opening picture is then obtained from the second position on the virtual scene interface, and the mirror-opening control is displayed at that second position of the mirror-opening picture.
When the mirror-opening picture is displayed as a complete picture overlaid on the virtual scene interface, the two pictures share the same size, i.e., the same resolution, on the display device. The position information of the second position on the mirror-opening picture can therefore be determined according to the position information of the first position and of the second position in the virtual scene interface; the mirror-opening control is displayed at the second position so determined, and at least one of the firing control, the jump control, the movement control and the prone control is displayed at the first position.
In a possible implementation manner, the display position of the open mirror control is determined according to the position information corresponding to the open mirror control.
The display position of the mirror-opening control can be independent of the displayed picture. In a game application, when a game picture with overlaid controls is displayed, the game picture is rendered by the GPU (Graphics Processing Unit) of the device and shown on the display device, while the controls overlaid on the game picture are rendered by the CPU (Central Processing Unit) of the device; the game controls are thus superimposed on the game picture as it is displayed. Therefore, when the mirror-opening control is triggered and the mirror-opening picture is displayed on the virtual scene interface, the control rendered by the CPU is simply overlaid on the image picture rendered by the GPU; as long as the CPU-side rendering settings are unchanged, switching the displayed picture does not change the display position or display mode of the mirror-opening control, and the display position of the mirror-opening control depends only on the position information corresponding to the control.
In summary, according to the mirror-opening picture display method provided by the application, the trigger operation of the user in the virtual scene interface is acquired, the mirror-opening control is moved to the position corresponding to the trigger operation, and when the user performs the first designated operation at the position where the mirror-opening control now is, the mirror-opening picture is displayed directly in the virtual scene interface. With this scheme, the user can interact with the mirror-opening control at the position of the trigger operation itself to display the mirror-opening picture, without having to move a finger or pointer device to the initial position of the mirror-opening control to trigger the mirror-opening function; the operations required of the user to realize the mirror-opening function are thereby simplified, and the human-computer interaction efficiency in the virtual scene interface is improved.
Fig. 4 is a flowchart illustrating a method for presenting a picture for opening a mirror according to an exemplary embodiment of the present application. The virtual screen display method may be executed by a computer device, where the computer device may be a terminal or a server, or the computer device may include the terminal and the server. As shown in fig. 4, the virtual screen display method includes:
step 401, displaying a virtual scene interface.
In a possible implementation manner, the virtual scene interface further includes a virtual firearm held by a virtual object corresponding to the virtual scene interface.
When the virtual scene interface includes a virtual picture of the virtual scene acquired from the first-person view of the virtual character, a virtual camera is usually arranged at the shoulder position of the virtual character in the virtual scene, and the three-dimensional picture acquired by that virtual camera is taken as the virtual scene picture captured from the first-person view of the virtual character.
In one possible implementation, the central region of the virtual scene interface contains a sight reticle corresponding to the virtual firearm.
The sight reticle is displayed in the central area of the virtual scene picture captured from the first-person view of the virtual character, so that the user's firing accuracy when controlling the virtual character in the non-mirror-opened state is improved to a certain degree, improving the user experience.
In a possible implementation manner, the virtual screen further includes a first area and a second area.
The first area is used for triggering the view angle rotation corresponding to the virtual role according to a first designated operation of a user.
In a possible implementation manner, a view angle control is displayed in the first area, when a user performs a sliding operation on the view angle control, the terminal acquires a sliding direction and a sliding size of the sliding operation corresponding to the view angle control, and determines an angle rotation size of a virtual camera corresponding to the virtual character according to the sliding direction and the sliding size, so as to acquire a virtual scene picture captured after the view angle of the virtual camera is rotated.
In another possible implementation manner, a hidden view angle control exists in the first area, when a user performs a sliding operation in the first area and at a position where no other control exists, the view angle control captures a sliding direction and a sliding size of the sliding operation of the user, and determines an angle rotation size of a virtual camera corresponding to the virtual character according to the sliding direction and the sliding size, so as to obtain a virtual scene picture captured after the view angle of the virtual camera is rotated.
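The mapping from a slide on the view-angle control to a camera rotation, described in the two implementations above, can be sketched as follows (a minimal sketch; the sensitivity factor and the axis conventions are assumptions, as the patent only states that the rotation size is determined from the sliding direction and size):

```python
def camera_rotation(slide_dx: float, slide_dy: float,
                    sensitivity: float = 0.1) -> tuple:
    """Map a slide vector on the view-angle control to (yaw, pitch) deltas
    in degrees for the virtual camera; the rotation magnitude grows with
    the slide magnitude."""
    yaw = slide_dx * sensitivity     # horizontal slide turns the camera
    pitch = -slide_dy * sensitivity  # vertical slide tilts it (screen y grows downward)
    return yaw, pitch
```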
The second area is used for triggering the movement operation corresponding to the virtual role according to the first designated operation.
In a possible implementation manner, a mobile control is displayed in the second area, and the virtual character is controlled to execute the mobile operation in the direction corresponding to the trigger position according to the trigger position of the continuous trigger operation of the user on the mobile control.
In a possible implementation manner, a mobile control is displayed in the second area, and the moving direction in which the virtual character performs the moving operation and the duration time in which the virtual character performs the moving operation are respectively determined according to the sliding direction corresponding to the sliding operation of the user on the mobile control and the operation time of the continuous pressing operation after the sliding operation.
In one possible implementation, the first area is a right half area of the virtual scene interface.
Please refer to fig. 5, which illustrates a schematic view of a virtual interface according to an embodiment of the present application.
The first area is a right half area 501 of the virtual scene interface, and correspondingly, the second area is a left half area 502 of the virtual scene interface; a user can realize the view angle rotation operation of the virtual role by the right half area of the virtual scene interface, so that the right half area of the virtual scene interface is also called as a rotating area of the virtual scene interface; the user can also move the virtual character through the left half area of the virtual scene interface, so the left half area of the virtual scene interface is also called as the moving area of the virtual scene interface. The rotating area 501 further includes a mirror opening control 503, and a user can perform a trigger operation on the mirror opening control to display a mirror opening picture.
The logic for judging whether the area triggered by the user belongs to the rotation area only requires that the abscissa of the pressed position be greater than that of the screen centre. That is, the coordinates (x, y) of the position triggered by the user are obtained, where x is the abscissa and y is the ordinate; screen.width is the horizontal rendering resolution of the game interface, so screen.width/2 is the central value of the abscissa of the game interface. When x is greater than screen.width/2, the position triggered by the user lies in the right-side area of the game interface, and the triggered area can be regarded as the rotation area.
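The centre-line check above amounts to a one-line predicate (a sketch; `screen_width` stands for the patent's screen.width):

```python
def in_rotation_area(x: float, screen_width: float) -> bool:
    """A pressed position belongs to the rotation (right-half) area when its
    abscissa x exceeds screen_width / 2, the centre abscissa of the
    rendered game interface."""
    return x > screen_width / 2
```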
Step 402, in response to a first trigger operation on a first area in the virtual scene interface, moving the open mirror control from the first position to a second position corresponding to the first trigger operation.
And the first position is the position for showing the open mirror control in the virtual scene interface.
In a possible implementation manner, after the mirror-opening control is moved from the first position to a second position corresponding to a first trigger operation in response to the first trigger operation on a first area in the virtual scene interface, the second position is acquired as a first position where the mirror-opening control is shown.
In one possible implementation manner, in response to that the first trigger operation is a touch operation and the second position is a current touch position of the touch operation, the mirror-opening control is moved from the first position to the second position.
In a possible implementation manner, when the first trigger operation is a touch operation, that is, the terminal receives a touch signal generated in the first area, acquires a second position corresponding to the touch operation according to the touch signal, and moves the mirror opening control from the first position to the second position, where the first position is different from the second position.
Because the first position is the current position of the mirror opening control, if the terminal receives the trigger operation of the user to the first position, the terminal determines that the user executes the trigger operation to the mirror opening control at the moment, and directly displays the mirror opening picture corresponding to the mirror opening control on the virtual scene interface according to the trigger operation.
In one possible implementation manner, in response to that the first trigger operation is a touch slide operation and the second position is a current touch position of the touch slide operation, the mirror-opening control is moved from the first position to the second position.
When the first trigger operation is a touch sliding operation, the terminal acquires a real-time touch position (namely a current touch position) of the touch sliding operation in real time, determines the current touch position as a second position, and moves the mirror opening control from the first position to the second position.
The touch-slide operation is a continuous operation process: for as long as the user keeps sliding, the terminal continuously receives the touch-slide signal and, according to the real-time trigger position of that signal, keeps moving the mirror-opening control so that the control follows the user's slide throughout the operation. When the user stops the touch-slide operation, the terminal updates the final position of the slide to be the second position and moves the mirror-opening control to that second position.
In one possible implementation manner, in response to that the first trigger operation is a touch slide operation, the second position is a current touch position of the touch slide operation, and a slide distance of the touch slide operation is greater than a slide distance threshold, the mirror-opening control is moved from the first position to the second position.
When the slide distance of the touch-slide operation is greater than the threshold, the terminal acquires the current touch position of that slide as the second position and moves the mirror-opening control from the first position to the second position.
When the slide distance is smaller than the threshold, the slide may be an unintended perturbation caused by non-deliberate movement such as a trembling finger; in that case the touch-slide operation is ignored and the mirror-opening control remains at its first position.
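The jitter filter described above can be sketched as follows (function name and coordinate representation are assumptions):

```python
def should_move_control(start: tuple, current: tuple,
                        slide_threshold: float) -> bool:
    """Move the mirror-opening control only when the slide distance exceeds
    the threshold; shorter slides are treated as hand-shake jitter and
    ignored, leaving the control at its first position."""
    dx = current[0] - start[0]
    dy = current[1] - start[1]
    return (dx * dx + dy * dy) ** 0.5 > slide_threshold
```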
Please refer to fig. 6, which illustrates a following schematic diagram of a mirror opening control according to an embodiment of the present application. As shown in fig. 6, when the user does not perform the first trigger operation on the first area, that is, the terminal does not receive the first trigger operation and a signal corresponding to the first trigger operation, the open mirror control is located at a first position 601; when the user executes the first trigger operation, the open mirror control moves from the first position 601 to the second position 602 corresponding to the first trigger operation.
When the first trigger operation is a touch operation, namely the user touches the second position 602, the open mirror control moves from the first position 601 to the second position 602 directly; when the user performs a touch sliding operation based on the second position 602, that is, the user touches the display device and slides along the track 603 in the first area, the mirror-opening control that has moved to the second position 602 at this time moves along the track 603 of the touch sliding operation of the user, and reaches the current touch position corresponding to the touch sliding operation.
In a possible implementation manner, based on the touch sliding operation, a viewing angle corresponding to a scene picture displayed in the virtual scene interface is adjusted.
Because the area where the touch sliding operation is located is the first area, when the first trigger operation is the touch sliding operation, the virtual scene interface also executes corresponding view angle rotation operation along with the touch sliding operation.
In a possible implementation manner, a hidden view angle control exists in the first area, when a user performs a touch sliding operation in the first area and at a position where no other control exists, the terminal acquires a sliding direction and a sliding position of the touch sliding operation, and determines an angle rotation size of a virtual camera corresponding to a virtual character according to the sliding direction and the sliding position while moving the mirror opening control according to the sliding position, so as to acquire a virtual scene picture captured after the view angle of the virtual camera rotates. That is to say, the user may perform a touch sliding operation at a position where the control is not displayed in the first area of the virtual scene interface, and at the same time, the rotation of the user view angle and the movement of the open mirror control are realized.
In a possible implementation manner, in response to a first trigger operation in a first area in the virtual scene interface, obtaining coordinate information of a second position corresponding to the first trigger operation; and updating the coordinate information of the mirror opening control based on the coordinate information of the second position.
In a possible implementation manner, when the first trigger operation is a touch operation, that is, when the terminal receives a touch signal corresponding to a touch operation of a user on a second position on the terminal display device, acquiring the position corresponding to the touch operation as the second position, and acquiring coordinate information corresponding to the touch operation as coordinate information of the second position; and updating the coordinate information of the mirror opening control based on the coordinate information of the second position.
That is to say, when the first trigger operation is a touch operation, the position information corresponding to the touch operation is directly acquired, and the coordinate information of the mirror-opening control is updated according to the position information to realize the movement of the mirror-opening control.
In another possible implementation manner, when the first trigger operation is a touch sliding operation, that is, when the terminal receives a series of continuous touch signals of touch operations at different positions on the terminal display device by the user, the current touch position corresponding to the touch sliding operation is obtained as the second position, and the coordinate information of the mirror-opening control is updated based on the coordinate information of the second position.
The current touch position of the user's touch-slide operation may be the final position of the slide, i.e., the user stops sliding upon reaching it; in that case the coordinate information of the mirror-opening control remains unchanged, so the control stays where it is. When the current touch position is not the final position, i.e., the user continues sliding to further positions in the first area as time passes, the previously acquired current touch position is updated to be the first position of the control, the position of the ongoing touch-slide operation is updated to be the second position, and the mirror-opening control is moved from the updated first position to the updated second position according to the coordinate information of the updated second position.
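The rolling first/second-position update described above can be sketched as follows (a hypothetical class; the patent does not name any data structure):

```python
class MirrorControl:
    """Tracks the mirror-opening control while it follows a touch slide."""

    def __init__(self, position: tuple):
        self.position = position  # current (first) position of the control

    def on_touch_sample(self, touch_position: tuple) -> tuple:
        # The previous position becomes the updated first position and the
        # new sample becomes the updated second position; the control moves
        # from the former to the latter.
        first, second = self.position, touch_position
        self.position = second
        return first, second
```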
In one possible implementation manner, in response to that the current touch position corresponding to the touch and slide operation of the user is outside the first area, a history position of the touch and slide operation in the first area closest to the current touch position is acquired as the second position.
When the touch slide operation of the user reaches the outside of the first area (second area), the last position of the touch slide operation of the user in the first area is acquired as a second position.
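The fallback to the most recent in-area position can be sketched as follows (the `in_first_area` predicate is an assumed callback; the patent only specifies the behaviour, not an interface):

```python
def second_position(samples, in_first_area):
    """Given the ordered touch samples of a slide, return the second
    position: the latest sample still inside the first area, so that a
    slide leaving the area falls back to its last in-area position."""
    last_inside = None
    for p in samples:
        if in_first_area(p):
            last_inside = p
    return last_inside
```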
In a possible implementation manner, in response to a first trigger operation in a first area in the virtual scene interface, obtaining control movement configuration information, where the control movement configuration information is used to indicate whether to allow movement of the position of the open mirror control; and in response to the control movement configuration information indicating the position allowing the mirror opening control to be moved, moving the mirror opening control from the first position to a second position corresponding to the first trigger operation.
Because the first trigger operation changes the position of the mirror-opening control through the user's trigger operation on the display device of the terminal, it can introduce a factor that is hard to control into the progress of the game program. Before this function is enabled, therefore, the user permission corresponding to the terminal needs to be determined through the control movement configuration information, and only when that permission allows the mirror-opening control to move can the control be moved by the first trigger operation in the first area.
When the acquired control movement configuration information indicates that the position of the open mirror control is not allowed to be moved, at this time, the user can only realize various functions corresponding to the controls displayed on the virtual scene interface or a visual angle rotation function corresponding to the first area of the virtual scene interface through the trigger operation in the first area, and cannot move the open mirror control.
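The permission gate described above can be sketched as follows (function and parameter names are assumptions):

```python
def resolve_control_position(allow_move: bool, control_pos: tuple,
                             trigger_pos: tuple) -> tuple:
    """Move the mirror-opening control to the trigger position only when the
    control movement configuration information permits it; otherwise the
    control keeps its current position."""
    return trigger_pos if allow_move else control_pos
```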
In one possible implementation, a control management interface is presented; the control management interface comprises configuration options of the open mirror control; and generating the control movement configuration information in response to the configuration operation of the configuration options of the mirror-opening control.
Please refer to fig. 7, which illustrates a schematic diagram of a control management interface according to an embodiment of the present application. As shown in fig. 7, the following mirror-opening control 701 is configured to configure control movement configuration information corresponding to the mirror-opening control, where 701 shows that the following mirror-opening function is already started, and when the following mirror-opening function is started, the control movement configuration information in the game program client is configured to allow the position of the mirror-opening control to be moved; or the user can trigger the opening control 701 to change the opening control 701 from the opening state to the closing state, and at this time, the control movement configuration information in the game program client is configured to disallow the movement of the position of the open mirror control; that is, the user may trigger the open control 701 by touching or sliding, so that the open control switches between the open state and the closed state.
And 403, in response to the first specified operation on the second position, displaying the mirror-opening picture in the virtual scene interface.
The second position is the position where the mirror opening control moves according to the first trigger operation, that is, the second position is the current position of the mirror opening control.
In one possible implementation, in response to the first specified operation being a click operation that is separated from the first trigger operation, the open mirror is presented in the virtual scene interface.
When the first designated operation is a click operation separated from the first trigger operation, there is an interval during which the user's click or slide triggers no operation signal on the virtual scene interface of the terminal. Taking application to a mobile terminal as an example, after the user's click or slide has moved the mirror-opening control, the user may lift the finger and then directly click the mirror-opening control, so that the mirror-opening picture is displayed in the virtual scene interface.
That is to say, when the terminal receives a first trigger operation of a user in the first area and moves the mirror opening control to the second position corresponding to the first trigger operation, the user can cancel the original operation, and directly click the mirror opening control moved to the position corresponding to the original finger or pointer device of the user, without moving to the first position corresponding to the mirror opening control initially and then clicking to realize mirror opening, so that the moving process of triggering the mirror opening control by the user is reduced, and the human-computer interaction efficiency is improved.
In one possible implementation, in response to the first specified operation being a long press operation executed for the first trigger operation, and the duration of the long press operation reaching a first time threshold, the open mirror screen is presented in the virtual scene interface.
When the first trigger operation is a touch operation, the first designated operation is the continued touch operation that follows the user's touch and holds its position; when the first trigger operation is a touch-slide operation, the first designated operation is the continued touch operation that holds the final touch position of the slide without moving.
When the terminal detects that the long-press operation at the second position (namely the current position of the mirror-opening control) has lasted for the first time threshold, the user can be considered to have found an enemy in the virtual scene picture at the current view angle, and the terminal performs the mirror-opening operation at that moment. The interaction between the user and the application client is thereby simplified, and the human-computer interaction efficiency is improved.
In one possible implementation manner, the mirror-opening picture is closed in the virtual scene interface in response to a third specified operation on the second position.
The third specified operation is a long press operation executed for continuing the first trigger operation, and the duration time of the long press operation reaches a second time threshold; the second time threshold is greater than the first time threshold.
After the mirror opening control is moved according to the first trigger operation, the duration executed at the position corresponding to the first trigger operation reaches a first time threshold value long-time pressing operation, and at the moment, the mirror opening control responds to the long-time pressing operation and reaches the first time threshold value, and a mirror opening picture is displayed on a virtual scene interface; and at the moment, if the user still continues the first trigger operation to execute the long-time pressing operation and the long-time pressing operation reaches a second time threshold which is larger than the first time threshold, triggering the mirror-opening control again to cancel the display of the mirror-opening picture from the virtual scene interface.
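The two-threshold long-press behaviour described above can be sketched as a state function (the state names and threshold values are assumptions; the patent only requires that the second threshold exceed the first):

```python
def scope_state(press_duration: float, t_open: float, t_close: float) -> str:
    """State of the mirror-opening picture during one sustained long press:
    nothing happens before t_open, the picture opens once the press reaches
    t_open, and it is closed again once the press reaches t_close."""
    assert t_close > t_open  # the second time threshold is greater than the first
    if press_duration >= t_close:
        return "closed"      # second threshold reached: picture dismissed
    if press_duration >= t_open:
        return "open"        # first threshold reached: picture shown
    return "idle"            # press too short: no mirror-opening yet
```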
Please refer to fig. 8, which illustrates a schematic view of an open mirror screen according to an embodiment of the present application. As shown in fig. 8, when the open mirror control is triggered through a first specified operation to implement the open mirror picture corresponding to the open mirror control, an alignment scale 800 exists in the open mirror picture, and the alignment scale 800 is displayed in the center of the virtual scene interface. The mirror opening picture also has a mirror opening control 801, wherein the mirror opening control 801 is configured to close the mirror opening picture in the virtual scene interface when receiving a trigger operation of a user.
Step 404, in response to a second trigger operation on the first area in the virtual scene interface, moving the open mirror control from the second position to a third position corresponding to the second trigger operation.
At this time, the virtual scene interface displays the open mirror picture.
In a possible implementation manner, the virtual scene interface further includes a virtual scene picture, and the open mirror interface is displayed on the virtual scene picture.
When the terminal receives a second trigger operation in the first area in the virtual scene interface, the terminal may move the open mirror control from the second position corresponding to the first area before the mirror opening to the third position corresponding to the second trigger operation according to the trigger operation, which is similar to step 402 and is not described herein again.
Step 405, in response to a second specified operation on the third position, closing the mirror-opening picture in the virtual scene interface.
In one possible implementation, the second specified operation is the same type of operation as the first specified operation.
When the user executes the second specified operation at the third position (the position where the open mirror control currently is), the open mirror picture is closed in the virtual scene interface, and the virtual scene picture underneath it is displayed again. That is to say, when the first specified operation described in step 403 is performed again on the open mirror control, now at the third position, the open mirror picture displayed in the virtual scene interface may be closed; the execution process of the second specified operation is similar to that of step 403 and is not described here again.
Please refer to fig. 9, which illustrates a schematic diagram of a trigger position determination process according to an embodiment of the present application. As shown in fig. 9, for an operation control, especially one on a mobile terminal, a finger press covers a small area rather than a single pixel, so it is difficult for the user to hit one exact pixel point. The open mirror trigger area 901 is therefore the triggerable area of the operation control on the mobile terminal: when the user's finger touches any part of the circular ring shown as the open mirror trigger area 901, the touched area is considered to correspond to the open mirror control.
In one possible implementation, a finger press inside the ring area triggers the open/close mirror operation, which is determined as follows:
acquire the center position (X, Y) of the open mirror control and the finger press position (X1, Y1), and apply the distance formula between two points:

AB = √((X − X1)² + (Y − Y1)²)

The distance AB is then compared with the configured radius; when AB is smaller than the radius, the press lies inside the trigger area.
Illustratively, in the application background, the configuration of the position and the radius may be as follows:
"xpos": -309,
"ypos": 202,
"zpos": 0,
"Scale": 0.6,
As shown above, xpos, ypos and zpos are the X, Y and Z coordinates of the center position, and Scale is the radius of the trigger ring corresponding to the open mirror control. The open mirror button can be moved by changing the values of xpos and ypos.
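Combining the configuration fields above with the distance formula, the trigger test can be sketched as follows. The field names come from the document itself; treating Scale directly as the on-screen radius (in whatever units the engine uses) is an assumption:

```python
import math

# Background configuration as given in the document (field names from the source).
config = {"xpos": -309, "ypos": 202, "zpos": 0, "Scale": 0.6}

def in_trigger_area(press_x: float, press_y: float, cfg: dict) -> bool:
    """Return True when the press point lies inside the circular trigger area.

    The distance AB between the control center (X, Y) and the press point
    (X1, Y1) is compared against the configured radius, as in the formula above.
    """
    dx = cfg["xpos"] - press_x
    dy = cfg["ypos"] - press_y
    return math.hypot(dx, dy) < cfg["Scale"]

# A press at the control center is inside the area; a distant press is not.
print(in_trigger_area(-309, 202, config), in_trigger_area(0, 0, config))
```

Moving the open mirror button then amounts to rewriting `xpos` and `ypos` in this configuration, as the document notes.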
Because the scheme provided by the embodiment of the application implements an unconventional way of operating the control, which may not suit lower-level players, whether the function is enabled needs to be controlled through the setting panel (control management interface): only after the control moving function has been turned on in the setting panel can the control move along with the user's trigger operation in the first area. In a shooting game, the player's operation area is roughly divided into a left side and a right side: the left side is the movement operation area, the right side is the rotation operation area, and the open mirror button lies in the right-side area of the screen. When the player presses a finger in the rotation area, the orientation change of the game character is triggered without affecting any other button; after the player turns the switch on, the open mirror button follows the position where the player's finger presses in the rotation area. As long as the finger slides and rotates within the right-side area, the open mirror button keeps following it, and when the player's finger leaves the operation area, the open mirror button stays fixed at the position the finger occupied when it left the screen.
However, the mirror is not opened immediately when the open mirror button moves to the finger position; certain conditions need to be met. The first condition is that a pressed position is recorded when the finger touches the screen and the current position is recorded as the finger slides, and the distance between these two positions must not exceed the planned configuration. The second condition is that a certain amount of time must elapse. Only when both conditions are met is the open mirror or close mirror operation triggered.
If the player currently only wants to trigger the orientation change function, for example when the sliding distance exceeds the first condition, the open mirror button merely follows the finger. If the player then releases the finger and immediately presses the open mirror button again, the open or close mirror operation is triggered at that moment. Because this design keeps the open mirror button under the player's finger at all times, the player only needs to lift the finger briefly and press again to open or close the mirror, which is very convenient and fast.
In a possible implementation manner of the embodiment of the present application, the conditions for triggering the open mirror button are as follows:

1. If the finger press falls inside the open mirror area, the open/close mirror operation is executed immediately.

2. If the finger press is not inside the open mirror area, the open mirror moving logic is first triggered to move the open mirror button to the pressed position; then, after a certain waiting time during which the sliding distance of the player's finger does not exceed the set value, the open/close mirror operation is triggered.
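The two conditions above can be combined into a single decision function, sketched below; the threshold values, function name, and parameters are illustrative assumptions:

```python
import math

# Illustrative thresholds; the document leaves the concrete values to configuration.
SLIDE_DISTANCE_THRESHOLD = 20.0  # maximum allowed finger drift while waiting
WAIT_TIME_THRESHOLD = 0.5        # seconds to wait after the button has moved

def should_toggle_scope(pressed_in_area: bool,
                        press_pos: tuple,
                        current_pos: tuple,
                        held_time: float) -> bool:
    """Decide whether the open/close mirror operation fires for this press."""
    if pressed_in_area:
        return True  # condition 1: the press landed inside the open mirror area
    # condition 2: the button has been moved to the press position; require that
    # the finger stayed near where it pressed and that the wait time has elapsed
    drift = math.dist(press_pos, current_pos)
    return drift <= SLIDE_DISTANCE_THRESHOLD and held_time >= WAIT_TIME_THRESHOLD
```

For example, a press inside the area toggles at once, while a press outside it toggles only after the wait time with the finger held roughly in place.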
In summary, the method for displaying the open mirror picture provided by the application obtains the user's trigger operation in the virtual scene interface, moves the open mirror control to the position corresponding to that trigger operation, and, when the user executes the first specified operation at the position where the open mirror control now is, directly displays the open mirror picture in the virtual scene interface. Because the trigger operation is acquired directly and the open mirror control is moved to the corresponding position, the user can interact with the control right where the finger or pointer is, without having to move it back to the control's initial position to trigger the open mirror function. This simplifies the operations required to open the mirror and improves the efficiency of human-computer interaction in the virtual scene interface.
Please refer to fig. 10, which is a schematic diagram illustrating an open mirror determination process according to an exemplary embodiment. As shown in fig. 10, take a game application client running on a mobile terminal device as an example. S1001, the user sets the open mirror moving control to the open state in the control management interface of the game application client and starts the open mirror following function, so as to enable the open mirror control moving operation described in this scheme. S1002, the terminal judges whether the user has triggered the screen; if the screen is not triggered, the terminal continues to monitor for the user's trigger instruction on the screen. S1003, when the terminal detects that the user has triggered the screen, it judges whether the triggered position is within the rotation area (namely the first area); if it is not, the terminal continues to monitor for the user's trigger instruction. S1004, when the position of the user's trigger is within the rotation area, the orientation rotation function is confirmed to be triggered. S1005, the terminal determines whether the specific position triggered by the user is within the open mirror area (namely the position of the open mirror control); when it is, the open mirror operation is triggered directly. S1006, when the specific position triggered by the user is within the rotation area but outside the open mirror area, the open mirror button (namely the open mirror control) is moved to the triggered position (which, on a mobile device, may be the position pressed by the user's finger).
S1007, after the open mirror button has moved to the position where the user triggered the screen, the terminal judges whether the open mirror condition is met, namely whether the user has kept the finger pressed at that position, without lifting it, until the first time threshold is reached; once the open mirror time condition is met, the open mirror operation is executed. S1008, while the terminal is executing the open mirror operation and displaying the open mirror picture, it detects whether the user's finger has left the screen; if the finger has not left the screen and the touch press duration reaches the first time threshold again, the open mirror control is triggered once more, the close mirror operation is executed, and the display of the open mirror picture is cancelled. S1009, when it is detected that the user's finger has left the screen, the open mirror button stops moving.
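The per-touch decisions of steps S1002-S1007 can be sketched as one dispatch function; the return labels and parameter names are assumptions made for illustration, not terms from the application:

```python
def handle_touch(follow_enabled: bool, in_rotation_area: bool,
                 in_scope_area: bool, press_time: float,
                 first_threshold: float = 0.5) -> str:
    """Condensed decision flow of S1002-S1007 for one touch event.

    Returns 'ignore', 'toggle_scope', 'move_button', or 'move_then_toggle'
    describing what the client does with this touch (labels are illustrative).
    """
    if not in_rotation_area:
        return "ignore"            # S1003: press outside the rotation area
    if in_scope_area:
        return "toggle_scope"      # S1005: press landed on the button itself
    if not follow_enabled:
        return "ignore"            # S1001: the follow switch was never turned on
    if press_time >= first_threshold:
        return "move_then_toggle"  # S1006-S1007: move the button, then open mirror
    return "move_button"           # S1006: only move the button so far
```

A press on the button toggles the scope at once; a held press elsewhere in the rotation area first moves the button and then opens the mirror when the time condition is met.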
Fig. 11 is a block diagram illustrating a device for displaying an open mirror image according to an exemplary embodiment of the present application, where the device for displaying an open mirror image may be applied to a computer device, and the computer device may be a terminal, or may be a terminal and a server, where the terminal may be the terminal shown in fig. 1. As shown in fig. 11, the open mirror picture display device includes:
a virtual interface display module 1101, configured to display a virtual scene interface; a mirror opening control is displayed at a first position of the virtual scene interface; the mirror opening control is used for triggering the mirror opening picture to be displayed in the virtual scene interface; the picture for opening the mirror is a picture when aiming is carried out through a virtual aiming device arranged on the virtual item;
a first open mirror control moving module 1102, configured to, in response to a first trigger operation on a first region in the virtual scene interface, move the open mirror control from the first position to a second position corresponding to the first trigger operation;
a mirror-opening picture displaying module 1103, configured to display the mirror-opening picture in the virtual scene interface in response to a first specified operation on the second location.
In one possible implementation, the open-mirror picture displaying module 1103 is configured to,
and in response to the first specified operation being a long press operation executed for the first trigger operation, and the duration of the long press operation reaching a first time threshold, displaying the open mirror picture in the virtual scene interface.
In one possible implementation, the open-mirror picture displaying module 1103 is configured to,
in response to the first specified operation being a click operation separated from the first trigger operation, the open mirror screen is displayed in the virtual scene interface.
In one possible implementation manner, the first open mirror control moving module 1102 is configured to,
and in response to the first trigger operation being a touch sliding operation and the second position being the current touch position of the touch sliding operation, moving the open mirror control from the first position to the second position.
In one possible implementation manner, the first open mirror control moving module 1102 is further configured to,
and in response to the first trigger operation being a touch sliding operation, the second position being a current touch position of the touch sliding operation, and a sliding distance of the touch sliding operation being greater than a sliding distance threshold, moving the open mirror control from the first position to the second position.
In one possible implementation manner, the first open mirror control moving module 1102 is further configured to,
and adjusting a visual angle corresponding to a scene picture displayed in the virtual scene interface based on the touch sliding operation.
In one possible implementation manner, the first open mirror control moving module 1102 includes:
the coordinate information acquisition unit is used for responding to a first trigger operation in a first area in the virtual scene interface and acquiring coordinate information of a second position corresponding to the first trigger operation;
and the coordinate information updating unit is used for updating the coordinate information of the mirror opening control based on the coordinate information of the second position.
In one possible implementation, the apparatus further includes:
the second open mirror control moving module is used for responding to a second trigger operation in a first area in the virtual scene interface and moving the open mirror control from the second position to a third position corresponding to the second trigger operation;
and the first mirror-opening picture closing module is used for responding to a second specified operation of the third position and closing the mirror-opening picture in the virtual scene interface.
In one possible implementation, the apparatus further includes:
a second open-mirror-picture closing module, configured to close the open mirror picture in the virtual scene interface in response to a third specified operation on the second position;
wherein the third specified operation is a long press operation executed for the first trigger operation, and the duration of the long press operation reaches a second time threshold; the second time threshold is greater than the first time threshold.
In one possible implementation manner, the first open mirror control moving module 1102 includes:
a mobile configuration obtaining unit, configured to obtain control mobile configuration information in response to a first trigger operation in a first region in the virtual scene interface, where the control mobile configuration information is used to indicate whether to allow movement of a position of the open mirror control;
and the open mirror control moving unit is configured to, in response to the control movement configuration information indicating that the position of the open mirror control is allowed to move, move the open mirror control from the first position to the second position corresponding to the first trigger operation.
In one possible implementation manner, the first open mirror control moving module 1102 includes:
the management interface display unit is used for displaying the control management interface; the control management interface comprises configuration options of the open mirror control;
and the configuration information generating unit is used for responding to the configuration operation of the configuration options of the open mirror control and generating the control movement configuration information.
In summary, the apparatus for displaying the open mirror picture provided by the application obtains the user's trigger operation in the virtual scene interface, moves the open mirror control to the position corresponding to that trigger operation, and, when the user executes the first specified operation at the position where the open mirror control now is, directly displays the open mirror picture in the virtual scene interface. Because the trigger operation is acquired directly and the open mirror control is moved to the corresponding position, the user can interact with the control right where the finger or pointer is, without having to move it back to the control's initial position to trigger the open mirror function. This simplifies the operations required to open the mirror and improves the efficiency of human-computer interaction in the virtual scene interface.
Fig. 12 is a block diagram illustrating the structure of a computer device 1200 according to an example embodiment. The computer device may be implemented as a server in the above-mentioned aspects of the present application.
The computer apparatus 1200 includes a Central Processing Unit (CPU) 1201, a system Memory 1204 including a Random Access Memory (RAM) 1202 and a Read-Only Memory (ROM) 1203, and a system bus 1205 connecting the system Memory 1204 and the CPU 1201. The computer device 1200 also includes a basic Input/Output system (I/O system) 1206, which facilitates transfer of information between various devices within the computer, and a mass storage device 1207 for storing an operating system 1213, application programs 1214, and other program modules 1215.
The basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1208 and input device 1209 are connected to the central processing unit 1201 through an input-output controller 1210 coupled to the system bus 1205. The basic input/output system 1206 may also include an input/output controller 1210 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1210 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the computer device 1200. That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or a Compact disk-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1204 and mass storage device 1207 described above may be collectively referred to as memory.
The computer device 1200 may also operate as a remote computer connected to a network such as the Internet, in accordance with various embodiments of the present disclosure. That is, the computer device 1200 may connect to the network 1212 through a network interface unit 1211 coupled to the system bus 1205, or may use the network interface unit 1211 to connect to other types of networks or remote computer systems (not shown).
The memory further includes at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the central processing unit 1201 implements all or part of the steps in the flowchart of the open mirror display method shown in each of the above embodiments by executing the at least one instruction, at least one program, code set, or instruction set.
Fig. 13 is a block diagram illustrating the structure of a computer device 1300 according to an example embodiment. The computer device 1300 may be a terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, computer device 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1302 is configured to store at least one instruction for execution by processor 1301 to implement the open picture presentation method provided by method embodiments herein.
In some embodiments, computer device 1300 may also optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1305 may be one, providing the front panel of the computer device 1300; in other embodiments, the display 1305 may be at least two, respectively disposed on different surfaces of the computer device 1300 or in a folded design; in still other embodiments, the display 1305 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 1300 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic location of the computer device 1300 for navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1309 is used to supply power to the various components in the computer device 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the computer apparatus 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the computer device 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to collect a 3D motion of the user with respect to the computer device 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1313 may be disposed on the side bezel of the computer device 1300 and/or underneath the display screen 1305. When the pressure sensor 1313 is disposed on the side frame of the computer device 1300, a user's holding signal to the computer device 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the computer device 1300. When a physical key or vendor Logo is provided on the computer device 1300, the fingerprint sensor 1314 may be integrated with the physical key or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315: when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness is reduced. In another embodiment, the processor 1301 may also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
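The ambient-light-to-brightness control described above can be sketched as a clamped linear mapping from lux to a backlight level. The lux breakpoints and level range below are illustrative assumptions, not values from the patent:

```python
def backlight_level(ambient_lux, min_level=10, max_level=255):
    """Hypothetical sketch of optical-sensor brightness control: ramp the
    display backlight linearly between an indoor-dim and a daylight lux
    reading, clamping at both ends."""
    LOW_LUX, HIGH_LUX = 10.0, 1000.0  # assumed breakpoints
    if ambient_lux <= LOW_LUX:
        return min_level
    if ambient_lux >= HIGH_LUX:
        return max_level
    frac = (ambient_lux - LOW_LUX) / (HIGH_LUX - LOW_LUX)
    return round(min_level + frac * (max_level - min_level))
```

Production implementations usually add smoothing over time so the backlight does not flicker when the reading hovers near a breakpoint.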
The proximity sensor 1316, also called a distance sensor, is typically disposed on the front panel of the computer device 1300. The proximity sensor 1316 is used to measure the distance between the user and the front face of the computer device 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the computer device 1300 is gradually decreasing, the processor 1301 controls the display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance is gradually increasing, the processor 1301 controls the display screen 1305 to switch from the screen-off state back to the screen-on state.
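The proximity-driven screen switching above is naturally a small state machine with hysteresis (different near/far thresholds so the screen does not oscillate at one boundary). The thresholds and naming here are assumptions for illustration:

```python
def next_screen_state(current_state, distance_cm, near_cm=3.0, far_cm=6.0):
    """Hypothetical sketch of proximity-sensor screen control: turn the
    screen off when the user's face approaches the front panel, and back
    on when it moves away. Separate near/far thresholds give hysteresis."""
    if current_state == "on" and distance_cm < near_cm:
        return "off"
    if current_state == "off" and distance_cm > far_cm:
        return "on"
    return current_state  # between thresholds: keep the current state
```

The gap between `near_cm` and `far_cm` is the design choice that prevents rapid on/off toggling when the measured distance sits right at a single threshold.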
Those skilled in the art will appreciate that the structure shown in FIG. 13 does not limit the computer device 1300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory storing at least one instruction, at least one program, a code set, or an instruction set executable by a processor to perform all or part of the steps of the method shown in the embodiment corresponding to FIG. 3 or FIG. 4. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or computer program is also provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform all or part of the steps of the method shown in the embodiment corresponding to FIG. 3 or FIG. 4.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An open-mirror picture display method, performed by a terminal, the method comprising:
displaying a virtual scene interface, wherein an open-mirror control is displayed at a first position in the virtual scene interface; the open-mirror control is used to trigger display of an open-mirror picture in the virtual scene interface; and the open-mirror picture is a picture shown when aiming through a virtual sight mounted on a virtual item;
in response to a first trigger operation in a first region of the virtual scene interface, moving the open-mirror control from the first position to a second position corresponding to the first trigger operation; and
in response to a first specified operation at the second position, displaying the open-mirror picture in the virtual scene interface.
2. The method according to claim 1, wherein displaying the open-mirror picture in the virtual scene interface in response to the first specified operation at the second position comprises:
in response to the first specified operation being a long-press operation continued from the first trigger operation, and the duration of the long-press operation reaching a first time threshold, displaying the open-mirror picture in the virtual scene interface.
3. The method according to claim 1, wherein displaying the open-mirror picture in the virtual scene interface in response to the first specified operation at the second position comprises:
in response to the first specified operation being a click operation separate from the first trigger operation, displaying the open-mirror picture in the virtual scene interface.
4. The method according to any one of claims 1 to 3, wherein moving the open-mirror control from the first position to the second position corresponding to the first trigger operation in response to the first trigger operation in the first region of the virtual scene interface comprises:
in response to the first trigger operation being a touch-slide operation, moving the open-mirror control from the first position to the second position, wherein the second position is the current touch position of the touch-slide operation.
5. The method according to claim 4, wherein moving the open-mirror control from the first position to the second position in response to the first trigger operation being a touch-slide operation comprises:
in response to the first trigger operation being a touch-slide operation whose sliding distance is greater than a sliding-distance threshold, moving the open-mirror control from the first position to the second position, the second position being the current touch position of the touch-slide operation.
6. The method according to claim 4, further comprising:
adjusting a viewing angle corresponding to the scene picture displayed in the virtual scene interface based on the touch-slide operation.
7. The method according to claim 1, wherein moving the open-mirror control from the first position to the second position corresponding to the first trigger operation in response to the first trigger operation in the first region of the virtual scene interface comprises:
in response to the first trigger operation in the first region of the virtual scene interface, acquiring coordinate information of the second position corresponding to the first trigger operation; and
updating the coordinate information of the open-mirror control based on the coordinate information of the second position.
8. The method according to claim 1, further comprising:
in response to a second trigger operation in the first region of the virtual scene interface, moving the open-mirror control from the second position to a third position corresponding to the second trigger operation; and
in response to a second specified operation at the third position, closing the open-mirror picture in the virtual scene interface.
9. The method according to claim 2, further comprising:
in response to a third specified operation at the second position, closing the open-mirror picture in the virtual scene interface;
wherein the third specified operation is a long-press operation continued from the first trigger operation, the duration of which reaches a second time threshold, the second time threshold being greater than the first time threshold.
10. The method according to claim 1, wherein moving the open-mirror control from the first position to the second position corresponding to the first trigger operation in response to the first trigger operation in the first region of the virtual scene interface comprises:
in response to the first trigger operation in the first region of the virtual scene interface, acquiring control-movement configuration information, the control-movement configuration information indicating whether the open-mirror control is allowed to be moved; and
in response to the control-movement configuration information indicating that the open-mirror control is allowed to be moved, moving the open-mirror control from the first position to the second position corresponding to the first trigger operation.
11. The method according to claim 10, further comprising, before moving the open-mirror control from the first position to the second position corresponding to the first trigger operation in response to the first trigger operation in the first region of the virtual scene interface:
displaying a control management interface, the control management interface comprising a configuration option for the open-mirror control; and
generating the control-movement configuration information in response to a configuration operation on the configuration option for the open-mirror control.
12. The method according to any one of claims 1 to 11, wherein the first region is the right half of the virtual scene interface.
13. An open-mirror picture display apparatus, for use in a terminal, the apparatus comprising:
a virtual interface display module, configured to display a virtual scene interface, wherein an open-mirror control is displayed at a first position in the virtual scene interface; the open-mirror control is used to trigger display of an open-mirror picture in the virtual scene interface; and the open-mirror picture is a picture shown when aiming through a virtual sight mounted on a virtual item;
a first open-mirror control moving module, configured to move the open-mirror control from the first position to a second position corresponding to a first trigger operation in response to the first trigger operation in a first region of the virtual scene interface; and
an open-mirror picture display module, configured to display the open-mirror picture in the virtual scene interface in response to a first specified operation at the second position.
14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the open-mirror picture display method according to any one of claims 1 to 12.
15. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by a processor to implement the open-mirror picture display method according to any one of claims 1 to 12.
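Illustratively, the control flow recited in claims 1, 2, and 5 — an open-mirror control that follows a touch-slide gesture past a distance threshold and opens the scope view on a long press — can be sketched as follows. The class, field names, and threshold values are hypothetical simplifications for illustration, not the claimed implementation:

```python
class OpenMirrorControl:
    """Hypothetical sketch of the flow in claims 1, 2, and 5: a scope
    ("open mirror") control that moves to the current touch position of a
    touch-slide operation and shows the scope view on a long press."""

    def __init__(self, first_position, slide_threshold=20.0, long_press_s=0.5):
        self.position = first_position          # first position (claim 1)
        self.slide_threshold = slide_threshold  # sliding-distance threshold (claim 5)
        self.long_press_s = long_press_s        # first time threshold (claim 2)
        self.scope_open = False

    def on_touch_slide(self, touch_position):
        # Claims 4-5: move the control to the current touch position once
        # the slide distance exceeds the threshold.
        dx = touch_position[0] - self.position[0]
        dy = touch_position[1] - self.position[1]
        if (dx * dx + dy * dy) ** 0.5 > self.slide_threshold:
            self.position = touch_position      # second position

    def on_long_press(self, duration_s):
        # Claim 2: a sufficiently long press at the second position opens
        # the open-mirror (scope) picture.
        if duration_s >= self.long_press_s:
            self.scope_open = True
```

The distance threshold separates an intentional drag (reposition the control) from an incidental touch, while the long-press threshold separates "open the scope" from a stray tap.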
CN202011377797.9A 2020-11-30 2020-11-30 Open mirror picture display method and device, computer equipment and storage medium Active CN112354181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011377797.9A CN112354181B (en) 2020-11-30 2020-11-30 Open mirror picture display method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112354181A true CN112354181A (en) 2021-02-12
CN112354181B CN112354181B (en) 2022-12-30

Family

ID=74535700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011377797.9A Active CN112354181B (en) 2020-11-30 2020-11-30 Open mirror picture display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112354181B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104436657A (en) * 2014-12-22 2015-03-25 青岛烈焰畅游网络技术有限公司 Method and device for controlling game and electronic equipment
WO2016208930A1 (en) * 2015-06-26 2016-12-29 주식회사 레드덕 Automatic aiming system and method for mobile game
CN106959812A (en) * 2016-01-11 2017-07-18 北京英雄互娱科技股份有限公司 Method and apparatus for man-machine interaction
CN108854063A (en) * 2018-04-24 2018-11-23 网易(杭州)网络有限公司 Method of sight, device, electronic equipment and storage medium in shooting game
CN109718559A (en) * 2018-12-24 2019-05-07 努比亚技术有限公司 Game control method, mobile terminal and computer readable storage medium
CN110141869A (en) * 2019-04-11 2019-08-20 腾讯科技(深圳)有限公司 Method of controlling operation thereof, device, electronic equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113318431A (en) * 2021-06-02 2021-08-31 网易(杭州)网络有限公司 In-game aiming control method and device
CN113318431B (en) * 2021-06-02 2023-12-19 网易(杭州)网络有限公司 In-game aiming control method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (country code: HK; legal event code: DE; document number: 40037960)
GR01 Patent grant