CN114392565A - Virtual photographing method, related device, equipment and storage medium - Google Patents

Virtual photographing method, related device, equipment and storage medium

Info

Publication number
CN114392565A
CN114392565A (application CN202210013318.8A)
Authority
CN
China
Prior art keywords
virtual
virtual object
displaying
shooting
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210013318.8A
Other languages
Chinese (zh)
Inventor
崔鸥
范思超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210013318.8A
Publication of CN114392565A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding

Abstract

The application discloses a virtual photographing method relating to the fields of games and social networking, with application scenarios covering various terminals such as mobile phones, computers, and vehicle-mounted terminals. The method comprises the following steps: in response to an object invitation instruction, displaying a first adding control in a preset area when a first virtual object is in a non-interactive state; in response to a touch operation on the first adding control, displaying a first object sub-page; in response to a selection operation on a plurality of second virtual objects in the first object sub-page, displaying the plurality of second virtual objects together with the first virtual object; and in response to a shooting instruction, performing virtual shooting processing on the game scene to obtain a first virtual shot image. The application further provides a related apparatus, a device, and a storage medium. On the one hand, the method improves the player's control over the shot picture and thereby achieves a better shooting effect; on the other hand, it provides richer gameplay for the player and makes the game application more engaging.

Description

Virtual photographing method, related device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a virtual photographing method, a related apparatus, a device, and a storage medium.
Background
With the development and popularization of computer device technology, more and more game applications emerge, and virtual worlds in the field of electronic games become more and more vivid. In recent years, a photographing mode has been added to many game applications. Players can start their own cameras in the game application at any time and any place, take interesting moments and leave a nice memory.
The photographing mode in a game application can be understood as a way to quickly capture a picture within the game. Currently, a player can control a game character to move to a certain position in a scene for a self-shot, or can wait along a path where other virtual objects are expected to appear.
However, the inventors have found that existing solutions suffer from at least the following problems: the existing shooting mode is limited, being mainly aimed at capturing a momentary screenshot; it gives the player poor control over the shot picture and cannot achieve a good shooting effect.
Disclosure of Invention
The embodiments of the application provide a virtual photographing method, a related apparatus, a device, and a storage medium. On the one hand, the player's control over the shot picture can be improved, thereby achieving a better shooting effect. On the other hand, richer gameplay is provided for the player, which increases the appeal of the game application and helps improve user stickiness.
In view of the above, an aspect of the present application provides a virtual photographing method, including:
responding to the object invitation instruction, and displaying a first adding control on a preset area when a first virtual object is in a non-interactive state, wherein the first virtual object is a virtual object controlled by a target object;
responding to touch operation aiming at the first adding control, and displaying a first object sub-page, wherein the first object sub-page comprises K virtual objects, and K is an integer larger than 1;
in response to a selection operation for a plurality of second virtual objects in the first object sub-page, displaying the plurality of second virtual objects and the first virtual object, wherein the plurality of second virtual objects are contained in the K virtual objects;
and responding to the shooting instruction, performing virtual shooting processing on a game scene to obtain a first virtual shooting image, wherein the game scene comprises a plurality of second virtual objects and the first virtual object.
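As an illustration only, the claimed method steps (invite, select, display, shoot) can be sketched as the following event handlers. All class, method, and string names here (`VirtualPhotoSession`, `on_shoot`, and so on) are assumptions made for the sketch and do not appear in the patent:

```python
class VirtualPhotoSession:
    """Illustrative sketch of the claimed flow: invite -> select -> display -> shoot."""

    def __init__(self, first_object, candidates):
        self.first_object = first_object   # virtual object controlled by the target object (player)
        self.candidates = candidates       # the K virtual objects shown on the sub-page (K > 1)
        self.scene = [first_object]        # objects currently displayed in the game scene

    def on_object_invitation(self, interacting):
        # The first adding control is displayed only when the first
        # virtual object is in a non-interactive state.
        return None if interacting else "first_add_control"

    def on_add_control_touched(self):
        # Touching the first adding control displays the first object
        # sub-page containing the K candidate virtual objects.
        return list(self.candidates)

    def on_select(self, selected):
        # The selected second virtual objects must be contained in the K candidates.
        assert all(s in self.candidates for s in selected)
        self.scene = [self.first_object] + list(selected)
        return self.scene

    def on_shoot(self):
        # Virtual shooting processing captures the scene containing the
        # first virtual object and the selected second virtual objects.
        return {"image": "first_virtual_shot", "objects": list(self.scene)}
```

A session would then be driven by the UI layer, e.g. `VirtualPhotoSession("player", ["npc_a", "npc_b"])` followed by the handlers in order.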
Another aspect of the present application provides a virtual camera device, including:
the display module is used for responding to the object invitation instruction, and displaying a first adding control on a preset area when the first virtual object is in a non-interactive state, wherein the first virtual object is a virtual object controlled by a target object;
the display module is further used for responding to touch operation aiming at the first adding control and displaying a first object sub-page, wherein the first object sub-page comprises K virtual objects, and K is an integer larger than 1;
the display module is also used for responding to the selection operation of a plurality of second virtual objects in the first object sub-page, and displaying the plurality of second virtual objects and the first virtual object, wherein the plurality of second virtual objects are contained in the K virtual objects;
and the shooting module is used for responding to the shooting instruction and virtually shooting the game scene to obtain a first virtually shot image, wherein the game scene comprises a plurality of second virtual objects and the first virtual object.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is further used for responding to the object invitation instruction, and then displaying a second adding control on the interactive object when the first virtual object is in the interactive state, wherein the interactive object is an object supporting interaction with the virtual object;
the display module is also used for responding to the touch operation aiming at the second adding control and displaying a second object sub-page;
the display module is also used for responding to the selection operation of a third virtual object in the second object sub-page and displaying the third virtual object and the first virtual object;
and the shooting module is also used for responding to the shooting instruction and carrying out virtual shooting processing on the game scene to obtain a second virtual shooting image, wherein the game scene comprises a third virtual object and a first virtual object.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is specifically configured to display T virtual objects on the first object sub-page, where each virtual object in the T virtual objects is a player character, and T is an integer greater than or equal to 1 and less than or equal to K.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is specifically used for displaying the object invitation controls corresponding to the plurality of second virtual objects on the first object sub-page;
responding to touch operation aiming at the object invitation control, and sending a photo-combination request to target terminal equipment, wherein the target terminal equipment is the terminal equipment for controlling a plurality of second virtual objects;
and if the photo matching request response is successful, displaying a plurality of second virtual objects and the first virtual object.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is specifically configured to display T virtual objects on the first object sub-page, where each virtual object in the T virtual objects is a non-player character, and T is an integer greater than or equal to 1 and less than or equal to K.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is specifically used for displaying object invitation controls corresponding to a plurality of second virtual objects on the first object sub-page, wherein interaction values between the plurality of second virtual objects and the first virtual object are greater than or equal to an interaction threshold value;
and responding to the touch operation aiming at the object invitation control, and displaying a plurality of second virtual objects and the first virtual object.
In one possible design, in another implementation manner of another aspect of the embodiment of the present application, the virtual photographing apparatus includes a control module;
the display module is further used for responding to an action setting instruction aiming at a second virtual object after a plurality of second virtual objects and the first virtual object are displayed, and then displaying a first action subpage, wherein the first action subpage comprises an action which can be executed by the second virtual object, and the second virtual object is a non-player character;
the control module is used for responding to selection operation of a first action in the first action sub-page and controlling the second virtual object to execute the first action;
alternatively,
the display module is further used for responding to an action setting instruction aiming at the first virtual object after the plurality of second virtual objects and the first virtual object are displayed, and displaying a second action subpage, wherein the second action subpage comprises an action which can be executed by the first virtual object;
and the control module is also used for responding to the selection operation of the second action in the second action sub-page and controlling the first virtual object to execute the second action.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is further used for responding to a handheld setting instruction aiming at the second virtual object after the plurality of second virtual objects and the first virtual object are displayed, and then displaying a first handheld sub-page, wherein the first handheld sub-page comprises a handheld object which can be used by the second virtual object, and the second virtual object is a non-player character;
the display module is also used for responding to the selection operation of the first handheld object in the first handheld sub page, and displaying a second virtual object holding the first handheld object;
alternatively,
the display module is further used for responding to a handheld setting instruction aiming at the first virtual object after the plurality of second virtual objects and the first virtual object are displayed, and displaying a second handheld sub page, wherein the second handheld sub page comprises a handheld object which can be used by the first virtual object;
and the display module is also used for responding to the selection operation of the second handheld object in the second handheld sub page and displaying the first virtual object holding the second handheld object.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is further used for responding to a mood setting instruction aiming at the second virtual object after the plurality of second virtual objects and the first virtual object are displayed, and then displaying a first mood sub-page, wherein the first mood sub-page comprises a mood which can be expressed by the second virtual object, and the second virtual object is a non-player character;
a display module, further configured to display the second virtual object with the first mood in response to a selection operation for the first mood in the first mood sub-page;
alternatively,
the display module is further used for responding to a mood setting instruction aiming at the first virtual object after the plurality of second virtual objects and the first virtual object are displayed, and displaying a second mood sub-page, wherein the second mood sub-page comprises moods which can be expressed by the first virtual object;
and the display module is also used for responding to the selection operation aiming at the second mood in the second mood sub-page and displaying the first virtual object with the second mood.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is further used for responding to the special effect display instruction after the plurality of second virtual objects and the first virtual objects are displayed, and displaying a scene special effect sub-page, wherein the scene special effect sub-page comprises a playable scene special effect;
and the display module is also used for responding to the selection operation aiming at the target scene special effect in the scene special effect sub-page and playing the target scene special effect.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is also used for responding to a shooting setting instruction and displaying M object controls, wherein each object control corresponds to one type of virtual object, and M is an integer greater than or equal to 1;
the display module is further used for canceling the display of the first virtual object if responding to a first selection operation aiming at the first object control, wherein the first object control is contained in the M object controls, and the first object control corresponds to the first virtual object;
the display module is further used for canceling the display of the associated virtual object if the display module responds to the first selection operation of the second object control, wherein the associated virtual object and the first virtual object belong to the same game team, the second object control is contained in the M object controls, and the second object control corresponds to the associated virtual object;
the display module is further configured to cancel displaying the non-associated virtual object if the first selection operation for the third object control is responded, where a friend relationship is not established between the non-associated virtual object and the first virtual object, the third object control is included in the M object controls, and the third object control corresponds to the non-associated virtual object;
and the display module is further used for canceling the display of the non-player character if the first selection operation aiming at the fourth object control is responded, wherein the fourth object control is contained in the M object controls, and the fourth object control corresponds to the non-player character.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is further used for displaying the first virtual object if responding to the second selection operation aiming at the first object control after the M object controls are displayed;
the display module is further used for displaying the associated virtual object if responding to a second selection operation aiming at a second object control after the M object controls are displayed;
the display module is further used for displaying the non-associated virtual object if responding to a second selection operation aiming at a third object control after the M object controls are displayed;
and the display module is also used for displaying the non-player character if responding to the second selection operation aiming at the fourth object control after the M object controls are displayed.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is also used for responding to the shooting setting instruction and displaying the lens following control;
and the control module is further used for controlling the sight line direction of the first virtual object to face the virtual camera if the lens following control is responded to the selection operation, wherein the virtual camera is used for carrying out virtual shooting processing.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the display module is also used for displaying the picture after the shooting angle is adjusted if the adjustment operation aiming at the shooting angle is responded;
and the display module is also used for displaying the picture after the shooting distance is adjusted if the adjustment operation aiming at the shooting distance is responded.
In one possible design, in another implementation manner of another aspect of the embodiment of the present application, the virtual photographing apparatus further includes a saving module;
the saving module is used for, after the virtual shooting processing is performed on the game scene in response to the shooting instruction to obtain the first virtual shot image, responding to a saving instruction to save the first virtual shot image into an electronic album;
and the display module is also used for displaying the first virtual shot image if responding to a viewing instruction aiming at the electronic album.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the shooting module is specifically used for responding to a shooting instruction and carrying out virtual shooting processing on a game scene to obtain a first image;
carrying out depth of field processing on the first image to obtain a second image;
and adjusting the far transition area of the second image to obtain a first virtual shot image.
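The patent names depth-of-field processing and a far-transition adjustment without specifying either algorithm, so the following toy sketch only illustrates one plausible reading: pixels whose depth falls outside a focus band are blurred, and pixels beyond a far threshold are additionally faded. All names, parameters, and formulas here are assumptions:

```python
def depth_of_field(pixels, depths, focus_near, focus_far, far_fade_start):
    """Toy 1-D depth-of-field pass: out-of-focus pixels are averaged with
    their neighbors, and pixels beyond far_fade_start are faded to soften
    the far transition area."""
    out = []
    n = len(pixels)
    for i, (p, d) in enumerate(zip(pixels, depths)):
        if focus_near <= d <= focus_far:
            v = p                                   # in focus: keep sharp
        else:
            lo, hi = max(0, i - 1), min(n, i + 2)   # out of focus: 3-tap average
            v = sum(pixels[lo:hi]) / (hi - lo)
        if d > far_fade_start:                      # far-transition adjustment
            v *= far_fade_start / d
        out.append(v)
    return out
```

A real implementation would of course run per-pixel on the GPU over a 2-D depth buffer; the structure (focus test, blur, far fade) is what the sketch is meant to show.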
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the shooting module is specifically used for responding to a shooting instruction and carrying out virtual shooting processing on a game scene to obtain a third image;
carrying out depth of field processing on the third image to obtain a fourth image;
and adjusting the far transition area of the fourth image to obtain a second virtual shot image.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the shooting module is specifically used for responding to a shooting instruction and carrying out virtual shooting processing on a game scene to obtain a first image to be processed;
scaling the first image to be processed to a target size to obtain a first sampling image;
and performing cropping processing on the first sampling image to obtain the first virtual shot image.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
and the shooting module is specifically used for carrying out reduction processing on the first image to be processed by adopting an interpolation algorithm to obtain a first sampling image with a target size.
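The patent does not fix which interpolation algorithm is used, so the "scale to target size, then crop" step can only be sketched under assumptions; here nearest-neighbor interpolation and a center crop are assumed, with an image represented as a list of rows:

```python
def downscale_nearest(image, target_w, target_h):
    """Reduce a 2-D image to the target size using nearest-neighbor
    interpolation (one possible choice; the patent does not specify)."""
    src_h, src_w = len(image), len(image[0])
    return [[image[r * src_h // target_h][c * src_w // target_w]
             for c in range(target_w)]
            for r in range(target_h)]

def center_crop(image, crop_w, crop_h):
    """Crop the sampled image around its center to the final shot size."""
    h, w = len(image), len(image[0])
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]
```

In practice a production pipeline would use bilinear or bicubic filtering from an image library; the two-stage structure (reduce, then crop) matches the steps described above.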
In one possible design, in another implementation of another aspect of an embodiment of the present application,
the shooting module is specifically used for responding to a shooting instruction and carrying out virtual shooting processing on a game scene to obtain a second image to be processed;
scaling the second image to be processed to a target size to obtain a second sampling image;
and performing clipping processing on the second sampling image to obtain a second virtual shot image.
In one possible design, in another implementation of another aspect of an embodiment of the present application,
and the shooting module is specifically used for carrying out reduction processing on the second image to be processed by adopting an interpolation algorithm to obtain a second sampling image with a target size.
Another aspect of the present application provides a terminal device, including: a memory, a processor, and a bus system;
wherein, the memory is used for storing programs;
a processor for executing the program in the memory, the processor being configured to perform the method of the above-described aspects according to instructions in the program code;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
Another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform the method of the above-described aspects.
In another aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by the above aspects.
According to the technical scheme, the embodiment of the application has the following advantages:
In the embodiments of the application, a virtual photographing method is provided: the terminal device displays a first object sub-page; in response to a selection operation on a plurality of second virtual objects in the first object sub-page, it displays the plurality of second virtual objects together with the first virtual object; and in response to a shooting instruction, it performs virtual shooting processing on the game scene to obtain a first virtual shot image. In this way, the player can actively invite other virtual objects into the game scene to be shot and then trigger the shooting function, producing a group photo of the player-controlled game character with the other virtual objects. On the one hand, this improves the player's control over the shot picture and thereby the shooting effect. On the other hand, it provides richer gameplay, increases the appeal of the game application, and helps improve user stickiness.
Drawings
Fig. 1 is a schematic diagram of an architecture of a virtual camera system in an embodiment of the present application;
fig. 2 is a schematic flow chart of a virtual photo setting method in the embodiment of the present application;
FIG. 3 is a diagram of a custom shader in an embodiment of the present application;
fig. 4 is a schematic flowchart of a virtual photographing method in an embodiment of the present application;
FIG. 5 is a schematic diagram of an interface for displaying a first object sub-page in a non-interactive state according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an interface for displaying a second object sub-page in an interactive state according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface displaying a first object sub-page in an embodiment of the present application;
FIG. 8 is a schematic diagram of an interface for triggering a co-ordinate request based on a player character in an embodiment of the present application;
FIG. 9 is a schematic diagram of another interface for displaying a first object sub-page in an embodiment of the present application;
FIG. 10 is a schematic diagram of an interface for triggering a co-ordinate request based on a non-player character in an embodiment of the present application;
FIG. 11 is a schematic diagram of an interface for setting actions for a non-player character in an embodiment of the present application;
FIG. 12 is a schematic diagram of an interface for setting actions for a player character in an embodiment of the present application;
FIG. 13 is a schematic diagram of an interface for providing a handheld object for a non-player character in accordance with an embodiment of the present application;
FIG. 14 is a schematic diagram of an interface for providing a handheld object for a player character in accordance with an embodiment of the present application;
FIG. 15 is a schematic illustration of an interface for mood setting for a non-player character in an embodiment of the present application;
FIG. 16 is a schematic illustration of an interface for setting mood for a player character in an embodiment of the present application;
FIG. 17 is a schematic diagram of an interface for setting a scene effect in an embodiment of the present application;
FIG. 18 is a schematic illustration of an interface for hiding player characters in an embodiment of the present application;
FIG. 19 is a schematic diagram of an interface for hiding player characters of the same team in an embodiment of the present application;
FIG. 20 is a schematic diagram of an interface for hiding non-buddy roles in an embodiment of the present application;
FIG. 21 is a schematic illustration of an interface for hiding non-player characters in an embodiment of the present application;
FIG. 22 is a schematic view of an interface for adjusting a gaze of a character in accordance with an embodiment of the present application;
FIG. 23 is a schematic view of an interface for adjusting the photographing angle according to an embodiment of the present application;
FIG. 24 is a schematic view of an interface for adjusting the distance and angle in the embodiment of the present application;
FIG. 25 is a schematic diagram of a virtual photography effect in the embodiment of the present application;
fig. 26 is a schematic diagram of a virtual camera device in an embodiment of the present application;
fig. 27 is a schematic structural diagram of a terminal device in the embodiment of the present application.
Detailed Description
The embodiments of the application provide a virtual photographing method, a related apparatus, a device, and a storage medium. On the one hand, the player's control over the shot picture can be improved, thereby achieving a better shooting effect. On the other hand, richer gameplay is provided for the player, which increases the appeal of the game application and helps improve user stickiness.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the progress of the technology, the image quality of the game is continuously improved, and the virtual world in the game field becomes more and more vivid. The game producer adds a photographing mode in the game, so that the player can catch various wonderful moments at any time during the game. The photographing mode can be embedded into different types of game applications (e.g., simulation type games), the simulation type games include, but are not limited to, simulation run games, simulation grow games, simulation sand table games, etc., and the present application takes the simulation run games as an example, which should not be construed as a limitation to the present application.
In order to improve controllability of a player on a shot picture and improve a shooting effect of virtual shooting, the application provides a virtual shooting method, which is applied to a virtual shooting system shown in fig. 1, as shown in the figure, the virtual shooting system includes a server and a terminal device, and a client is deployed on the terminal device, wherein the client may run on the terminal device in a browser form, or run on the terminal device in an independent Application (APP) form, and a specific presentation form of the client is not limited herein. The server related to the application can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, safety service, Content Delivery Network (CDN), big data and an artificial intelligence platform. The terminal device may be a smart phone, a tablet computer, a notebook computer, a palm computer, a personal computer, a smart television, a smart watch, a vehicle-mounted device, a wearable device, and the like, but is not limited thereto. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein. The number of servers and terminal devices is not limited. The scheme provided by the application can be independently completed by the terminal device, can also be independently completed by the server, and can also be completed by the cooperation of the terminal device and the server, so that the application is not particularly limited.
Based on the virtual photographing system shown in fig. 1, a game scene can be virtually photographed. For ease of understanding, please refer to fig. 2, which is a schematic flow chart of a virtual photographing method in an embodiment of the present application. As shown in the figure, the flow is specifically as follows:
in step S1, the user triggers an interactive page view instruction on the client.
In step S2, the client sends an instruction to the server to query the interactive unlocking information, since the interactive content available to the user needs to be unlocked first. For example, if user A has not unlocked the "smile" expression, user A cannot perform the "smile" expression.
In step S3, the server queries the interactive content unlocked by the user according to the instruction sent by the client.
In step S4, the server returns the unlocking information of the interactive content to the client.
In step S5, the client displays the available interactive content.
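The unlock query and filtering in steps S2 to S5 can be sketched as follows; the catalog, field names, and functions here are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of steps S2-S5: the server returns the set of
# interactive content a user has unlocked, and the client shows only that.
ALL_INTERACTIONS = ["smile", "wave", "stretch", "jump"]  # hypothetical catalog

def query_unlocked(user_unlocks: dict, user_id: str) -> set:
    """Server side (S3): look up the interactions unlocked by this user."""
    return set(user_unlocks.get(user_id, []))

def available_interactions(unlocked: set) -> list:
    """Client side (S5): display only content that is both defined and unlocked."""
    return [name for name in ALL_INTERACTIONS if name in unlocked]

unlocks = {"userA": ["wave", "jump"]}  # "smile" not unlocked, so it is hidden
print(available_interactions(query_unlocked(unlocks, "userA")))  # -> ['wave', 'jump']
```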
In step S6, the user clicks on the selected interactive contents.
In step S7, the client responds to the user's click operation and triggers playback of the selected interactive content.
In step S9, the client presents the interactive content to the user.
In step S10, show/hide control may also be performed on the elements displayed on the interface; that is, objects in the game are shown or hidden: unrelated objects are hidden, other player characters are hidden, and a dedicated User Interface (UI) manager is implemented to uniformly hide the designated UIs, for example, the main UI and the heads-up display (HUD) prompt layer that are unrelated to shooting.
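A minimal sketch of the dedicated UI manager described above, assuming a simple name-to-visibility registry (class and method names are hypothetical, not from the patent):

```python
# Sketch of a UI manager that uniformly hides designated UI layers before a
# shot and restores them afterwards (see also the restoration in step S14).
class UIManager:
    def __init__(self):
        self.layers = {}     # layer name -> visible flag
        self._saved = None   # visibility snapshot for restoration

    def register(self, name, visible=True):
        self.layers[name] = visible

    def hide_for_photo(self, designated):
        """Snapshot current visibility, then hide every designated layer."""
        self._saved = dict(self.layers)
        for name in designated:
            if name in self.layers:
                self.layers[name] = False

    def restore(self):
        """Restore the pre-photo visibility."""
        if self._saved is not None:
            self.layers = self._saved
            self._saved = None

ui = UIManager()
ui.register("main_ui"); ui.register("hud_prompts"); ui.register("photo_frame")
ui.hide_for_photo(["main_ui", "hud_prompts"])
print(ui.layers)  # main_ui and hud_prompts hidden, photo_frame still visible
ui.restore()
```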
In step S11, the client performs a screenshot to provide a preview of the photographed image. The current game picture is fetched from the frame buffer (Framebuffer) to generate a texture (Texture).
The screenshot texture is set into a pre-built UI, a UI animation for displaying the shooting result is played, a matching shutter sound effect is played, and an exposure highlight effect is simulated. Referring to fig. 3, fig. 3 is a schematic diagram of a custom shader in the embodiment of the present application. As shown in the figure, the custom shader (Shader) takes a brightness (Bright) value as input and, after algorithmic conversion, additively (Additive) superimposes it on the base color (BaseColor) to output the final color (Final Color).
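The additive superposition in fig. 3 amounts to adding a brightness contribution to each channel of the base color and clamping; a per-pixel sketch for illustration only (the actual effect would run on the GPU in shader code, not Python):

```python
def additive_blend(base_color, bright):
    """Add a brightness contribution to each RGB channel and clamp to [0, 1],
    mimicking the Additive superposition of BaseColor in the custom shader."""
    return tuple(min(1.0, c + bright) for c in base_color)

# A bright flash pushes channels toward white, simulating exposure highlight.
print(additive_blend((0.25, 0.5, 0.9), 0.25))  # -> (0.5, 0.75, 1.0)
```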
In step S12, the user clicks the save control, whereby the client can save the captured image.
In step S13, the screenshot texture is converted into bitmap (Bitmap) device-independent data, compressed into the lossless Portable Network Graphics (PNG) format, down-sampled to an appropriate size through scaling, and cropped to a uniform aspect ratio across platforms. It should be noted that, because screenshots from different devices differ in resolution, they differ in size and storage footprint, so unified processing is required. When the image is reduced, an interpolation algorithm is used for image processing, so that the new image has fewer pixels than the original.
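The cropping and down-sampling in step S13 can be sketched as follows. This sketch uses a centered crop and nearest-neighbor reduction for brevity; the patent only specifies "an interpolation algorithm", so the concrete choices here are assumptions:

```python
def crop_to_aspect(width, height, target_w, target_h):
    """Return a centered crop box (x, y, w, h) matching target_w:target_h."""
    if width * target_h > height * target_w:      # too wide: trim width
        new_w = height * target_w // target_h
        return ((width - new_w) // 2, 0, new_w, height)
    new_h = width * target_h // target_w          # too tall: trim height
    return (0, (height - new_h) // 2, width, new_h)

def downsample(pixels, new_w, new_h):
    """Nearest-neighbor reduction: the result has fewer pixels than the source.
    (A production build might use bilinear or area interpolation instead.)"""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)] for y in range(new_h)]

print(crop_to_aspect(1920, 1080, 1, 1))  # -> (420, 0, 1080, 1080)
grid = [[x + 10 * y for x in range(4)] for y in range(4)]
print(downsample(grid, 2, 2))            # -> [[0, 2], [20, 22]]
```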
In step S14, the processed picture data is stored in the writable directory of the respective device, and the whole data processing process is completed on multiple threads to optimize the overall fluency of the game. The show/hide control is then restored, re-displaying the objects in the game scene that were hidden.
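A sketch of the off-thread save in step S14, assuming a simple worker thread and a writable directory (the API shape is illustrative, not the patent's implementation):

```python
import os
import tempfile
import threading

def save_photo_async(png_bytes: bytes, writable_dir: str, name: str) -> threading.Thread:
    """Write the processed image data on a worker thread so the game
    thread stays smooth."""
    def worker():
        path = os.path.join(writable_dir, name)
        with open(path, "wb") as f:   # disk I/O happens off the game thread
            f.write(png_bytes)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

with tempfile.TemporaryDirectory() as d:
    t = save_photo_async(b"\x89PNG...", d, "photo_001.png")
    t.join()              # a game loop would poll for completion instead
    print(os.listdir(d))  # -> ['photo_001.png']
```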
With reference to the above description, the following describes a virtual photographing method in the present application, and referring to fig. 4, an embodiment of the virtual photographing method in the embodiment of the present application includes:
110. the terminal equipment responds to the object invitation instruction, and displays a first adding control on a preset area when a first virtual object is in a non-interactive state, wherein the first virtual object is a virtual object controlled by a target object;
in one or more embodiments, when the user triggers the object invitation instruction, the terminal device responds to the object invitation instruction and displays the first object sub-page. The first object sub-page includes K virtual objects. In one case, all K virtual objects are presented on the first object sub-page; in another case, only some of the K virtual objects are presented on the first object sub-page, which is not limited herein.
It is understood that the K virtual objects may be non-player characters (NPCs), or Player Characters (PCs), or include NPCs and PCs.
The present application takes a simulation management game as an example for introduction; the simulation management game may be one with a god's-eye view or a third-person perspective. In the simulation management game, the player can take the role of a manager to manage the virtual world in the game.
120. The terminal equipment responds to touch operation aiming at the first adding control and displays a first object subpage, wherein the first object subpage comprises K virtual objects, and K is an integer larger than 1;
in one or more embodiments, the user (i.e., the target object) may invite other virtual objects to take a group photo with the first virtual object under his or her control. To do so, the user (i.e., the target object) may trigger the object invitation instruction by clicking the first add control.
Specifically, for the convenience of understanding, please refer to fig. 5, fig. 5 is an interface diagram illustrating that the first object sub-page is displayed in the non-interactive state in the embodiment of the present application, as shown in (a) of fig. 5, a1 is used for indicating the first add control, and a2 is used for indicating the first virtual object, at which time, the first virtual object indicated by a2 is in the non-interactive state. When the user (i.e., the target object) clicks the first add control indicated by a1, an object invitation instruction is triggered, and thus, an interface as illustrated in (B) of fig. 5 is displayed.
As shown in fig. 5 (B), wherein A3 is used to indicate the first object sub-page, a4 is used to indicate the backspace control, and a5 is used to indicate the slider control. The first object subpage has displayed thereon virtual objects, for example, virtual object "BBB", virtual object "CCC", and virtual object "D". The user (i.e., the target object) may view other virtual objects on the first object sub-page by sliding the slider control indicated by a 5. When the user (i.e., the target object) clicks the rollback control indicated by a4, the interface illustrated in fig. 5 (a) is returned to.
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 5 are only one example, and should not be construed as limiting the present application.
130. The terminal equipment responds to selection operation of a plurality of second virtual objects in the first object sub-page, and displays the plurality of second virtual objects and the first virtual object, wherein the plurality of second virtual objects are contained in the K virtual objects;
in one or more embodiments, a user selects a plurality of virtual objects from the K virtual objects provided by the first object subpage, where the virtual objects are the second virtual objects in the present application. Based on this, the terminal device may display a game scene including a plurality of second virtual objects and the first virtual object.
The plurality of second virtual objects are non-player characters or player characters, the first virtual object is a virtual object controlled by a target object, and the target object is a user who plays the game.
140. The terminal equipment responds to the shooting instruction, carries out virtual shooting processing on a game scene to obtain a first virtual shooting image, wherein the game scene comprises a plurality of second virtual objects and a first virtual object.
In one or more embodiments, when a shooting instruction is triggered by a user, the terminal device responds to the shooting instruction and performs virtual shooting processing on a current game scene to obtain a first virtual shooting image.
It will be appreciated that although the game scene includes the plurality of second virtual objects and the first virtual object, the user may choose to hide the plurality of second virtual objects, the first virtual object, or both. Accordingly, the first virtual photographed image may display the plurality of second virtual objects and the first virtual object, only the plurality of second virtual objects, or only the first virtual object.
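The effect of the hide choices on the captured image can be expressed as a simple set difference (object names are illustrative):

```python
def photographed_objects(scene_objects, hidden):
    """Objects appearing in the virtual photographed image are the scene
    objects minus whatever the user chose to hide."""
    hidden_set = set(hidden)
    return [o for o in scene_objects if o not in hidden_set]

scene = ["first_virtual_object", "second_A", "second_B"]
# Hiding all second virtual objects leaves only the first virtual object.
print(photographed_objects(scene, ["second_A", "second_B"]))  # -> ['first_virtual_object']
```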
In the embodiment of the present application, a virtual photographing method is provided. Through the method, the player can actively invite other virtual objects into the game scene to be photographed and trigger the shooting function, thereby realizing a group photo of the player-controlled game character and other virtual objects. Therefore, on the one hand, the player's control over the shot picture can be improved, achieving a good shooting effect; on the other hand, a richer playing method is provided for the player, increasing the fun of the game application and helping to improve user stickiness.
Optionally, on the basis of the foregoing embodiments corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device responds to the object invitation instruction, the method may further include:
when the first virtual object is in an interactive state, the terminal equipment displays a second adding control on an interactive object, wherein the interactive object is an object supporting interaction with the virtual object;
the terminal equipment responds to the touch operation aiming at the second adding control and displays a second object sub-page;
the terminal equipment responds to the selection operation of a third virtual object in the second object sub-page and displays the third virtual object and the first virtual object;
and the terminal equipment responds to the shooting instruction, performs virtual shooting processing on a game scene to obtain a second virtual shooting image, wherein the game scene comprises a third virtual object and a first virtual object.
In one or more embodiments, a manner of inviting virtual objects in an interactive state is presented. As can be seen from the foregoing embodiments, the user (i.e., the target object) may invite other virtual objects to group with the first virtual object controlled by the user, based on which the user (i.e., the target object) needs to trigger the object invitation instruction.
Specifically, for ease of understanding, please refer to fig. 6, where fig. 6 is a schematic diagram of an interface for displaying a second object sub-page in an interactive state according to an embodiment of the present application, as shown in fig. 6 (a), B1 is used to indicate a second add control, and B2 is used to indicate a first virtual object, where the first virtual object indicated by B2 is in the interactive state. When the user (i.e., the target object) clicks the second add control indicated by B1, the object invitation instruction is triggered, thereby displaying the interface as illustrated in (B) of fig. 6.
As shown in fig. 6 (B), wherein B3 is used to indicate the second object sub-page, B4 is used to indicate the backspace control, and B5 is used to indicate the slider control. The second object subpage has displayed thereon virtual objects, for example, virtual object "BBB", virtual object "CCC", and virtual object "D". The user (i.e., the target object) may view other virtual objects on the second object sub-page by sliding the slider control indicated by B5. When the user (i.e., the target object) clicks the rollback control indicated by B4, the interface illustrated in fig. 6 (a) is returned to.
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 6 are only one example, and should not be construed as limiting the present application.
Secondly, in the embodiment of the present application, a manner of inviting virtual objects in an interactive state is provided. In this manner, when the first virtual object controlled by the user (i.e., the target object) is in the interactive state, other virtual objects can still be invited into the current game scene, thereby increasing the diversity of game playing methods.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, the displaying, by the terminal device, the first object sub-page may specifically include:
the terminal device displays T virtual objects on the first object sub-page, wherein each virtual object in the T virtual objects is a player character, and T is an integer which is greater than or equal to 1 and less than or equal to K.
In one or more embodiments, a manner of inviting player characters is presented. As can be appreciated from the foregoing embodiments, users (i.e., target objects) may invite player characters controlled by other users to join a game scene.
Specifically, for ease of understanding, please refer to fig. 7, where fig. 7 is a schematic interface diagram illustrating a first object sub-page in the embodiment of the present application, and as shown in the figure, 3 virtual objects shown on the first object sub-page are taken as an example, that is, at this time, T is equal to 3. Wherein the virtual object "BBB" and the virtual object "CCC" belong to player characters controlled by game friends of the user (i.e., the target object). And the virtual object "DDD" belongs to a player character controlled by a non-game friend (i.e., stranger) of the user (i.e., the target object).
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 7 are only one example, and should not be construed as limiting the present application.
Secondly, in the embodiment of the present application, a method for inviting player characters is provided. Through this method, the user can actively invite player characters controlled by other users to join the game scene, which on the one hand increases the diversity of playing methods, and on the other hand enhances communication and the sense of interaction among users.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, the displaying, by the terminal device, the plurality of second virtual objects and the first virtual object in response to a selection operation for the plurality of second virtual objects in the first object sub-page may specifically include:
the terminal equipment displays object invitation controls corresponding to the second virtual objects on the first object sub-page;
the terminal equipment responds to touch operation aiming at the object invitation control and sends a co-shooting request to target terminal equipment, wherein the target terminal equipment is the terminal equipment for controlling a plurality of second virtual objects;
and if the co-shooting request is responded to successfully, the terminal device displays the plurality of second virtual objects and the first virtual object.
In one or more embodiments, a manner in which a player character joins a game scene is described. As can be seen from the foregoing embodiments, a user (i.e., a target object) may invite a player character controlled by another user to join a group photo. Based on this, the user controlling that player character decides whether to join the group photo, and if that user accepts the co-shooting request, the player character automatically navigates to the game scene where the first virtual object is located.
Specifically, for ease of understanding, the virtual object "BBB" is described below as one of the plurality of second virtual objects. Referring to fig. 8, fig. 8 is a schematic diagram of an interface for triggering a co-shooting request based on a player character in the embodiment of the present application. As shown in fig. 8 (A), C1 is used to indicate account information of a second virtual object, where the nickname of the second virtual object is "BBB", and the second virtual object belongs to a player character controlled by a friend of the user (i.e., the target object). C2 is used to indicate an object invitation control. C3 is used to indicate the first virtual object.
When the user (i.e., the target object) clicks the object invitation control indicated by C2, a co-shooting request is triggered, and thus the co-shooting request may be sent to the target terminal device controlling the second virtual object. The co-shooting request is responded to successfully if the user controlling the second virtual object agrees to join the group photo. Thus, the interface shown in fig. 8 (B) is displayed, in which C4 is used to indicate the second virtual object.
It should be noted that the user (i.e., the target object) may also invite other second virtual objects in a similar manner, and the above example takes the example of adding any one second virtual object as an example. The contents of the interface elements and the layout of the interface elements shown in fig. 8 are only one illustration and should not be construed as limiting the present application.
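The accept-then-display handshake can be sketched as follows. A real implementation would exchange network messages between the two terminal devices; this condenses it to the decision point, with names assumed for illustration:

```python
def handle_co_shoot_response(accepts: bool, scene: list, invited: str) -> bool:
    """If the invited user agrees to join the group photo, the invited
    character is added to (navigates into) the inviter's game scene."""
    if accepts:
        scene.append(invited)
        return True
    return False   # declined: the scene is unchanged

scene = ["first_virtual_object"]
print(handle_co_shoot_response(True, scene, "BBB"), scene)
# -> True ['first_virtual_object', 'BBB']
```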
In the embodiment of the present application, a manner in which a player character joins the game scene is provided. In this manner, the user can invite player characters controlled by other users to join the group photo, and those users can decide whether to join according to their own needs, thereby promoting game interaction and communication among users.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, the displaying, by the terminal device, the first object sub-page may specifically include:
the terminal device displays T virtual objects on the first object sub-page, wherein each virtual object in the T virtual objects is a non-player character, and T is an integer which is greater than or equal to 1 and less than or equal to K.
In one or more embodiments, a manner of inviting non-player characters is presented. As can be appreciated from the foregoing embodiments, a user (i.e., a target object) may invite a non-player character to join a game scenario.
Specifically, for convenience of understanding, please refer to fig. 9, where fig. 9 is another interface schematic diagram showing a first object sub-page in the embodiment of the present application, and as shown in the figure, 3 virtual objects shown on the first object sub-page are taken as an example, that is, at this time, T is equal to 3. Wherein, the virtual object "BBB", the virtual object "CCC" and the virtual object "DDD" all belong to non-player characters. Each virtual object corresponds to an interaction value, and a higher interaction value indicates that the non-player character interacts with the first virtual object more frequently, i.e., the non-player character has a greater "goodness" to the first virtual object.
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 9 are merely illustrative, and should not be construed as limiting the present application.
Secondly, in the embodiment of the application, a mode of inviting the non-player character is provided, and through the mode, the user can actively invite the non-player character to join in the game scene, so that the diversity of the playing method is increased.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in the embodiment of the present application, the displaying, by the terminal device, the plurality of second virtual objects and the first virtual object in response to a selection operation for the plurality of second virtual objects in the first object sub-page may specifically include:
the terminal equipment displays object invitation controls corresponding to a plurality of second virtual objects on a first object sub-page, wherein interaction values between the plurality of second virtual objects and the first virtual object are greater than or equal to an interaction threshold value;
and the terminal equipment responds to the touch operation aiming at the object invitation control and displays a plurality of second virtual objects and the first virtual object.
In one or more embodiments, a manner in which a non-player character joins a game scene is described. As can be seen from the foregoing embodiments, a user (i.e., a target object) may invite a non-player character to join a group photo, and the invited non-player character automatically enters the game scene in which the first virtual object is located.
Specifically, for ease of understanding, the virtual object "BBB" is described below as one of the plurality of second virtual objects. Referring to fig. 10, fig. 10 is a schematic diagram of an interface for triggering a co-shooting invitation based on a non-player character in the embodiment of the present application. As shown in fig. 10 (A), D1 is used to indicate account information of a second virtual object, where the nickname of the second virtual object is "BBB" and the interaction value between the second virtual object and the first virtual object is "90". Assuming the interaction threshold is "70", the object invitation control indicated by D2 may be displayed. D3 is used to indicate the first virtual object.
Illustratively, the interaction value between the virtual object "CCC" and the first virtual object is "10", i.e., the interaction value is less than the interaction threshold, and thus, the object invitation control of the virtual object "CCC" is not displayed.
When the user (i.e., the target object) clicks the object invitation control indicated by D2, a touch operation for the object invitation control is triggered, thereby displaying an interface as shown in (B) of fig. 10, wherein D4 is used to indicate the second virtual object.
It should be noted that the user (i.e., the target object) may also invite other second virtual objects in a similar manner, and the above example takes the example of adding any one second virtual object as an example. The contents of the interface elements and the layout of the interface elements shown in fig. 10 are only one illustration and should not be construed as limiting the present application.
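The threshold gating described above can be sketched directly from the example values in fig. 10 (an interaction value of 90 for "BBB", 10 for "CCC", and a threshold of 70):

```python
INTERACTION_THRESHOLD = 70  # threshold assumed in the example of fig. 10

def show_invite_control(interaction_value: int) -> bool:
    """The object invitation control is displayed only when the non-player
    character's interaction value with the first virtual object meets or
    exceeds the interaction threshold."""
    return interaction_value >= INTERACTION_THRESHOLD

print(show_invite_control(90))  # "BBB" -> True, control shown
print(show_invite_control(10))  # "CCC" -> False, control hidden
```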
In the embodiment of the present application, a manner in which a non-player character joins the game scene is provided. In this manner, the user can invite non-player characters whose interaction value is greater than or equal to the interaction threshold to join the group photo, which on the one hand encourages the user to interact with non-player characters in the game, and on the other hand increases the diversity of playing methods.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to an action setting instruction aiming at a second virtual object, and displays a first action sub-page, wherein the first action sub-page comprises an action which can be executed by the second virtual object, and the second virtual object is a non-player character;
the terminal equipment responds to the selection operation of the first action in the first action sub-page and controls the second virtual object to execute the first action;
alternatively, the first and second electrodes may be,
after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to the action setting instruction aiming at the first virtual object, and displays a second action subpage, wherein the second action subpage comprises an action which can be executed by the first virtual object;
and the terminal equipment controls the first virtual object to execute the second action in response to the selection operation of the second action in the second action sub-page.
In one or more embodiments, a manner of setting actions for virtual objects is presented. As can be seen from the foregoing embodiments, a user (i.e., a target object) may configure actions for a first virtual object under its own control or may configure actions for a plurality of second virtual objects that are not under its own control. The following description will be made with reference to the drawings.
Illustratively, the second virtual object belongs to a non-player character, taking the action of configuring any one of the second virtual objects as an example. Referring to fig. 11, fig. 11 is a schematic diagram of an interface for setting actions for a non-player character in the embodiment of the present application, as shown in fig. 11 (a), E1 is used to indicate a second virtual object, E2 is used to indicate an action setting control, and since the second virtual object is selected at this time, when a user (i.e., a target object) clicks the action setting control indicated by E2, an action setting instruction for the second virtual object is triggered, thereby displaying the interface shown in fig. 11 (B).
The "stretch" action is taken as the first action. As shown in fig. 11 (B), E3 is used to indicate the first action sub-page, and E4 is used to indicate the "stretch" action control. When the user (i.e., the target object) clicks the "stretch" action control indicated by E4, a selection operation for the first action is triggered, and thus the interface shown in fig. 11 (C) is displayed. As can be seen, the second virtual object indicated by E5 performs the first action (i.e., the "stretch" action).
Illustratively, the first virtual object is a player character controlled by a user (i.e., a target object), taking as an example an action to configure the first virtual object. Referring to fig. 12, fig. 12 is a schematic diagram of an interface for setting actions for a player character in the embodiment of the present application, as shown in (a) of fig. 12, F1 is used to indicate a first virtual object, and F2 is used to indicate an action setting control, since the first virtual object is selected at this time, when a user (i.e., a target object) clicks the action setting control indicated by F2, an action setting instruction for the first virtual object is triggered, thereby displaying the interface shown in (B) of fig. 12.
The "stretching" action is taken as the second action. As shown in fig. 12 (B), wherein F3 is used to indicate the second action sub-page, and F4 is used to indicate the "stretch" action control. When the user (i.e., the target object) clicks the "stretch" action control indicated by F4, a selection operation for the second action is triggered, and thus, an interface as shown in fig. 12 (C) is displayed. As can be seen, the first virtual object indicated by F5 performs a second action (i.e., a "stretch" action).
It should be noted that the user (i.e., the target object) may also configure other second virtual objects in a similar manner, and the above example is described by taking an example of adding any one second virtual object. The contents of the interface elements and the layout of the interface elements shown in fig. 11 and 12 are only one illustration and should not be construed as limiting the present application. It will be appreciated that the executable actions provided on the action subpage may be obtained in advance during the course of the game, or may be purchased using a game resource (e.g., gold coins), or may be obtained in other ways, not limited herein.
Next, in the embodiment of the present application, a manner of setting an action for a virtual object is provided, and by this manner, a function of setting a photographing action is added to a virtual object and a non-player character controlled by a user (i.e., a target object). Based on the above, in the game process, the user (namely, the target object) can set various preset elements, so that the content of the photographing behavior is enriched to a certain extent, and more fun is provided for the player.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to a handheld setting instruction for a second virtual object, and displays a first handheld sub-page, wherein the first handheld sub-page comprises a handheld object which can be used by the second virtual object, and the second virtual object is a non-player character;
the terminal equipment responds to selection operation of a first handheld object in the first handheld sub page, and displays a second virtual object holding the first handheld object;
alternatively, the first and second electrodes may be,
after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to a handheld setting instruction aiming at the first virtual object, and displays a second handheld sub-page, wherein the second handheld sub-page comprises a handheld object capable of being used by the first virtual object;
and the terminal equipment responds to the selection operation of the second handheld object in the second handheld sub page and displays the first virtual object holding the second handheld object.
In one or more embodiments, a manner of providing a handheld object for a virtual object is presented. As can be seen from the foregoing embodiments, the user (i.e., the target object) may match the handheld object with a first virtual object under its own control, or may match the handheld object with a plurality of second virtual objects under no own control. The following description will be made with reference to the drawings.
Illustratively, the second virtual object belongs to a non-player character, taking as an example a handheld object configuring any one of the second virtual objects. Referring to fig. 13, fig. 13 is an interface diagram illustrating setting a handheld object for a non-player character in the embodiment of the present application, as shown in fig. 13 (a), G1 is used to indicate a second virtual object, G2 is used to indicate a handheld setting control, and since the second virtual object is selected at this time, when a user (i.e., a target object) clicks the handheld setting control indicated by G2, a handheld setting instruction for the second virtual object is triggered, thereby displaying an interface shown in fig. 13 (B).
A "potted plant" is taken as the first handheld object. As shown in fig. 13 (B), G3 is used to indicate the first handheld sub-page, and G4 is used to indicate the handheld control corresponding to the "potted plant". When the user (i.e., the target object) clicks the handheld control indicated by G4, a selection operation for the first handheld object is triggered, and thus the interface shown in fig. 13 (C) is displayed. As can be seen, the second virtual object indicated by G5 holds the first handheld object (i.e., the "potted plant").
Illustratively, the first virtual object is a player character controlled by the user (i.e., the target object), and the configuration of a handheld object for the first virtual object is taken as an example. Referring to fig. 14, fig. 14 is an interface diagram illustrating setting of a handheld object for a player character in the embodiment of the present application. As shown in fig. 14 (A), H1 is used to indicate a first virtual object, and H2 is used to indicate a handheld setting control. Since the first virtual object is selected at this time, when the user (i.e., the target object) clicks the handheld setting control indicated by H2, a handheld setting instruction for the first virtual object is triggered, thereby displaying the interface shown in fig. 14 (B).
A "potted plant" is taken as the second handheld object. As shown in fig. 14 (B), H3 is used to indicate the second handheld sub-page, and H4 is used to indicate the handheld control corresponding to the "potted plant". When the user (i.e., the target object) clicks the handheld control indicated by H4, a selection operation for the second handheld object is triggered, and thus the interface shown in fig. 14 (C) is displayed. As can be seen, the first virtual object indicated by H5 holds the second handheld object (i.e., the "potted plant").
It should be noted that the user (i.e., the target object) may also configure other second virtual objects in a similar manner, and the above example is described by taking an example of adding any one second virtual object. The contents of the interface elements and the layout of the interface elements shown in fig. 13 and 14 are only one illustration and should not be construed as limitations of the present application. It will be appreciated that the usable hand-held items provided on the hand-held subpage may be obtained in advance during the course of the game, or may be purchased using a game resource (e.g., gold coins), or may be obtained in other ways, without limitation.
Secondly, in the embodiment of the present application, a manner of setting a handheld object for a virtual object is provided, and in this manner, a function of setting a character handheld object is added to a virtual object and a non-player character controlled by a user (i.e., a target object). Based on the above, in the game process, the user (namely, the target object) can set various preset elements, so that the content of the photographing behavior is enriched to a certain extent, and more fun is provided for the player.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to a mood setting instruction aiming at the second virtual object, and displays a first mood sub-page, wherein the first mood sub-page comprises a mood which can be expressed by the second virtual object, and the second virtual object is a non-player character;
the terminal equipment responds to selection operation aiming at the first mood in the first mood sub-page and displays a second virtual object with the first mood;
alternatively,
after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to the mood setting instruction aiming at the first virtual object, and then displays a second mood sub-page, wherein the second mood sub-page comprises the mood which can be expressed by the first virtual object;
the terminal device responds to selection operation aiming at a second mood in the second mood sub-page, and displays the first virtual object with the second mood.
In one or more embodiments, a manner of setting mood for a virtual object is presented. As can be seen from the foregoing embodiments, a user (i.e., a target object) may configure moods for a first virtual object that is controlled by the user, or may configure moods for a plurality of second virtual objects that are not controlled by the user. The following description will be made with reference to the drawings.
Illustratively, the second virtual object belongs to a non-player character, and the configuration of a mood for any one of the second virtual objects is taken as an example. Referring to fig. 15, fig. 15 is a schematic diagram of an interface for setting a mood for a non-player character in the embodiment of the present application. As shown in fig. 15 (A), I1 is used to indicate a second virtual object, and I2 is used to indicate a mood setting control. Since the second virtual object is selected at this time, when the user (i.e., the target object) clicks the mood setting control indicated by I2, a mood setting instruction for the second virtual object is triggered, thereby displaying the interface shown in fig. 15 (B).
The "note" is taken as the first mood. As shown in (B) of fig. 15, I3 is used to indicate the first mood sub-page, and I4 is used to indicate the mood control corresponding to the "note". When the user (i.e., the target object) clicks the mood control indicated by I4, a selection operation for the first mood is triggered, and thus, an interface as shown in fig. 15 (C) is displayed. As can be seen, the second virtual object indicated by I5 exhibits a first mood (i.e., exhibits a "note"). Optionally, different moods may be collocated with different actions to express more voice-overs.
Illustratively, the first virtual object is a player character controlled by the user (i.e., the target object), and the configuration of a mood for the first virtual object is taken as an example. Referring to fig. 16, fig. 16 is a schematic diagram of an interface for setting a mood for a player character in the embodiment of the present application. As shown in fig. 16 (A), J1 is used to indicate a first virtual object, and J2 is used to indicate a mood setting control. Since the first virtual object is selected at this time, when the user (i.e., the target object) clicks the mood setting control indicated by J2, a mood setting instruction for the first virtual object is triggered, thereby displaying the interface shown in fig. 16 (B).
A "note" is taken as the second mood. As shown in fig. 16 (B), J3 is used to indicate the second mood sub-page, and J4 is used to indicate the mood control corresponding to the "note". When the user (i.e., the target object) clicks the mood control indicated by J4, a selection operation for the second mood is triggered, and thus the interface shown in fig. 16 (C) is displayed. As can be seen, the first virtual object indicated by J5 exhibits the second mood (i.e., exhibits a "note"). Optionally, different moods may be paired with different actions to express more voice-overs.
A mood can be expressed as an expression bubble (emoji), a special display object that highlights the facial expression or mental activity of a character. It can be superimposed over the 3D model and rendered into the picture. To give the expression bubbles vivid expressive force, two main rendering modes are adopted in the present application: skeletal animation and particle animation. The skeletal animation can be implemented with a Spine plug-in, and the particle animation with an interface particle (UIParticle) plug-in, so that a high-quality animation effect can be achieved at a low texture and production cost. In the present application, the expression bubble supports multiple attachment modes, such as attaching to an object (Actor), to a sub-model (Mesh), to a bone (Bone), or to a slot (Slot). A plurality of key slots, such as above, below, to the left of, to the right of, and at the center of the face, are pre-embedded in the head of the 3D model of a virtual object, so an accurate 3D attachment position can be calculated as needed; this position is then converted into a 2D screen control coordinate, and a 2D custom offset is superimposed to obtain the final screen coordinate. In addition, when the virtual object moves, the bubble follows it, keeping a fixed position relative to its attachment point. When the lens zooms in or out, the size of the expression bubble automatically adapts to a reasonable scale.
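The attachment flow described above — a pre-embedded 3D slot position projected to a 2D screen coordinate, a custom 2D offset superimposed, and the bubble size adapted to lens zoom — could be sketched as follows. This is a minimal illustration, not the engine's actual API: the pinhole projection, coordinate convention (camera looking down +z), and all parameter names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def project_to_screen(anchor, cam_pos, focal_px, screen_w, screen_h, offset_2d=(0.0, 0.0)):
    """Project a 3D attachment point (e.g. a pre-embedded face slot) to a 2D
    screen coordinate and superimpose a custom 2D offset (simplified pinhole
    model; a real engine would use its own world-to-screen call)."""
    depth = anchor.z - cam_pos.z  # distance along the view axis
    if depth <= 0:
        return None  # attachment point is behind the camera
    sx = screen_w / 2 + focal_px * (anchor.x - cam_pos.x) / depth + offset_2d[0]
    sy = screen_h / 2 + focal_px * (anchor.y - cam_pos.y) / depth + offset_2d[1]
    return (sx, sy)

def bubble_scale(depth, reference_depth=500.0, min_s=0.5, max_s=2.0):
    """Scale the expression bubble inversely with depth so its size adapts
    when the lens zooms in or out, clamped to a reasonable range."""
    s = reference_depth / max(depth, 1e-6)
    return max(min_s, min(max_s, s))
```

Because the bubble position is recomputed from the slot every frame, it naturally follows the virtual object as it moves, which matches the fixed-relative-position behavior described above.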
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 15 and 16 are only one example, and should not be construed as limiting the present application. It will be appreciated that the expressible mood provided on the mood sub-page may be obtained in advance during the course of the game, or purchased using a game resource (e.g., gold coins), or otherwise obtained, and is not limited herein.
Next, in the embodiment of the present application, a manner of setting a mood for a virtual object is provided, and by this manner, a function of setting a mood of a character is added to a virtual object and a non-player character controlled by a user (i.e., a target object). Based on the above, in the game process, the user (namely, the target object) can set various preset elements, so that the content of the photographing behavior is enriched to a certain extent, and more fun is provided for the player.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device displays the plurality of second virtual objects and the first virtual object, the method may further include:
the terminal equipment responds to the special effect display instruction and displays a scene special effect sub-page, wherein the scene special effect sub-page comprises a playable scene special effect;
and the terminal equipment responds to the selection operation aiming at the target scene special effect in the scene special effect sub-page and plays the target scene special effect.
In one or more embodiments, a manner of setting scene effects is presented. As can be seen from the foregoing embodiments, a user (i.e., a target object) can set a scene special effect, thereby increasing the ambience. The following description will be made with reference to the drawings.
Specifically, for ease of understanding, please refer to fig. 17, fig. 17 is a schematic diagram of an interface for setting a scene special effect in the embodiment of the present application, and as shown in fig. 17 (a), K1 is used to indicate a special effect setting control. When the user (i.e., the target object) clicks the special effect setting control indicated by K1, a special effect display instruction is triggered, and thereby, an interface as illustrated in (B) in fig. 17 is displayed.
The maple leaves are used as the special effect of a target scene. As shown in (B) of fig. 17, K2 is used to indicate a scene special effects sub-page, and K3 is used to indicate a special effects control corresponding to "maple leaves". When the user (i.e., the target object) clicks the special effect control indicated by K3, a selection operation for the special effect of the target scene is triggered, and thus, an interface as illustrated in (C) in fig. 17 is displayed. As can be seen, in the current game scene, the target scene special effect (i.e., having the effect of "maple leaves" falling) is played.
The contents of the interface elements and the layout of the interface elements shown in fig. 17 are merely illustrative, and should not be construed as limiting the present application. It will be appreciated that the playable scene effects provided on the scene effects sub-page may be obtained in advance during the course of the game, or may be purchased using game resources (e.g., gold coins), or may be obtained in other ways, and are not limited herein.
Secondly, in the embodiment of the application, a mode for setting the scene special effect is provided, and through the mode, the function of setting the scene special effect is added. Based on this, in the game process, the user (namely, the target object) can set various preset elements, so that stronger atmosphere sense is created to a certain extent, and more pleasure is brought to the player.
Optionally, on the basis of each embodiment corresponding to fig. 4, another optional embodiment provided in the embodiments of the present application may further include:
the terminal equipment responds to a shooting setting instruction and displays M object controls, wherein each object control corresponds to one type of virtual object, and M is an integer greater than or equal to 1;
if the first virtual object is not displayed, the terminal equipment displays the first object control corresponding to the first virtual object;
if the first selection operation aiming at the second object control is responded, the terminal equipment cancels the display of the associated virtual object, wherein the associated virtual object and the first virtual object belong to the same game team, the second object control is contained in the M object controls, and the second object control corresponds to the associated virtual object;
if the first selection operation aiming at the third object control is responded, the terminal equipment cancels the display of the non-associated virtual object, wherein a friend relationship is not established between the non-associated virtual object and the first virtual object, the third object control is contained in the M object controls, and the third object control corresponds to the non-associated virtual object;
and if the first selection operation aiming at the fourth object control is responded, the terminal equipment cancels the display of the non-player character, wherein the fourth object control is contained in the M object controls, and the fourth object control corresponds to the non-player character.
In one or more embodiments, a way to hide different types of virtual characters is presented. As can be seen from the foregoing embodiments, a user (i.e., a target object) may select the virtual characters to be hidden in a game scene. The player character controlled by the user (i.e., the target object) is the first virtual object. A player character on the same game team as the first virtual object (i.e., a teammate) is an associated virtual object. A player character that has not established a friend relationship with the first virtual object (i.e., a stranger) is a non-associated virtual object.
Referring to fig. 18, fig. 18 is a schematic view of an interface for hiding a player character in the embodiment of the present application, as shown in (a) of fig. 18, when a user (i.e., a target object) clicks a setting control, a shooting setting instruction is triggered, so that a setting interface indicated by L5 is displayed, and 4 object controls (i.e., M is 4) are displayed on the setting interface, where each object control corresponds to one type of virtual object. L1 is used to indicate a first virtual object, L2 is used to indicate an associated virtual object, L3 is used to indicate a non-associated virtual object, and L4 is used to indicate a non-player character (e.g., a town resident). L6 is used to indicate a first object control. When the user (i.e., the target object) clicks the first object control indicated by L6, a first selection operation for the first object control is triggered, and thus, an interface as shown in fig. 18 (B) is displayed. At this time, the first virtual object is no longer displayed in the current game scene.
Referring to fig. 19, fig. 19 is a schematic diagram of an interface for hiding player characters of the same team in the embodiment of the present application, and as shown in (a) of fig. 19, when a user (i.e., a target object) clicks a setting control, a shooting setting instruction is triggered, so that a setting interface indicated by M5 is displayed, 4 object controls (i.e., M is 4) are displayed on the setting interface, and each object control corresponds to one type of virtual object. M1 is used to indicate a first virtual object, M2 is used to indicate an associated virtual object, M3 is used to indicate a non-associated virtual object, and M4 is used to indicate a non-player character (e.g., a town resident). M6 is used to indicate a second object control. When the user (i.e., the target object) clicks the second object control indicated by M6, a first selection operation for the second object control is triggered, and thus, an interface as shown in fig. 19 (B) is displayed. At this time, the associated virtual object (i.e., teammate) is no longer displayed in the current game scene.
For example, referring to fig. 20, fig. 20 is an interface diagram illustrating that a non-friend character is hidden in the embodiment of the present application, as shown in (a) in fig. 20, when a user (i.e., a target object) clicks a setting control, a shooting setting instruction is triggered, so that a setting interface indicated by N5 is displayed, and 4 object controls (i.e., M is 4) are displayed on the setting interface, where each object control corresponds to one type of virtual object. N1 is used to indicate a first virtual object, N2 is used to indicate an associated virtual object, N3 is used to indicate a non-associated virtual object, and N4 is used to indicate a non-player character (e.g., a town resident). N6 is used to indicate a third object control. When the user (i.e., the target object) clicks the third object control indicated by N6, a first selection operation for the third object control is triggered, and thus, an interface as shown in (B) in fig. 20 is displayed. At this point, the unassociated virtual object (i.e., a stranger) is no longer displayed in the current game scene.
Referring to fig. 21, fig. 21 is a schematic view of an interface for hiding a non-player character in an embodiment of the present application, as shown in (a) of fig. 21, when a user (i.e., a target object) clicks a setting control, a shooting setting instruction is triggered, so that a setting interface indicated by O5 is displayed, and 4 object controls (i.e., M is 4) are displayed on the setting interface, where each object control corresponds to one type of virtual object. O1 for indicating a first virtual object, O2 for indicating an associated virtual object, O3 for indicating a non-associated virtual object, and O4 for indicating a non-player character (e.g., town resident). O6 is used to indicate a fourth object control. When the user (i.e., the target object) clicks the fourth object control indicated by O6, a first selection operation for the fourth object control is triggered, and thus, an interface as shown in (B) in fig. 21 is displayed. At this time, the non-player character (e.g., town resident) is no longer displayed in the current game scene.
The contents of the interface elements and the layout of the interface elements shown in fig. 18, 19, 20, and 21 are merely illustrative, and should not be construed as limiting the present application.
Secondly, in the embodiment of the present application, a method for hiding different types of virtual roles is provided, and in the above manner, a user (i.e., a target object) can control different types of virtual roles to hide according to preferences of the user, so as to achieve a better shooting effect.
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided in this embodiment of the present application, after the terminal device displays the M object controls, the method may further include:
if the second selection operation aiming at the first object control is responded, the terminal equipment displays the first virtual object;
if the second selection operation aiming at the second object control is responded, the terminal equipment displays the associated virtual object;
if the second selection operation aiming at the third object control is responded, the terminal equipment displays the non-associated virtual object;
if the second selection operation for the fourth object control is responded, the terminal equipment displays the non-player character.
In one or more embodiments, a manner of redisplaying different types of virtual characters is presented. As can be seen from the foregoing embodiments, a user (i.e., a target object) may redisplay previously hidden virtual characters in a game scene. The player character controlled by the user (i.e., the target object) is the first virtual object. A player character on the same game team as the first virtual object (i.e., a teammate) is an associated virtual object. A player character that has not established a friend relationship with the first virtual object (i.e., a stranger) is a non-associated virtual object.
Illustratively, referring again to FIG. 18, when the user (i.e., the target object) clicks the first object control indicated by L6 again, a second selection operation for the first object control is triggered, whereby the first virtual object may be displayed in the current game scene.
Illustratively, referring again to FIG. 19, when the user (i.e., the target object) clicks again on the second object control indicated by M6, a second selection operation for the second object control is triggered, whereby the associated virtual object may be displayed in the current game scene.
Illustratively, referring again to FIG. 20, when the user (i.e., the target object) clicks the third object control indicated by N6 again, a second selection operation for the third object control is triggered, and thus, a non-associated virtual object may be displayed in the current game scene.
Illustratively, referring again to FIG. 21, when the user (i.e., the target object) clicks again on the fourth object control indicated by O6, a second selection operation for the fourth object control is triggered, whereby the non-player character may be displayed in the current game scene.
In the embodiment of the present application, a method for redisplaying different types of virtual characters is provided, and in the above manner, a user (i.e., a target object) can control different types of virtual characters to display according to the preference of the user, so as to achieve a better shooting effect.
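The hide/redisplay behavior of the M object controls described above can be sketched as a simple category-based visibility filter, where the first selection of a control hides its category and the second selection redisplays it. The class and enum names below are illustrative, not taken from the application:

```python
from enum import Enum, auto

class CharType(Enum):
    SELF = auto()      # first virtual object (the user's own player character)
    TEAMMATE = auto()  # associated virtual object, same game team
    STRANGER = auto()  # non-associated virtual object, no friend relationship
    NPC = auto()       # non-player character, e.g. a town resident

class VisibilityFilter:
    """Tracks which character categories are hidden; each of the M object
    controls toggles exactly one category."""
    def __init__(self):
        self.hidden = set()

    def toggle(self, char_type):
        # First selection hides the category, second selection redisplays it.
        if char_type in self.hidden:
            self.hidden.discard(char_type)
        else:
            self.hidden.add(char_type)

    def visible(self, characters):
        """Return only the (name, type) pairs whose category is not hidden."""
        return [c for c in characters if c[1] not in self.hidden]
```

Modeling the controls as toggles keeps the hide and redisplay operations symmetric, which matches the first-selection/second-selection behavior described for figs. 18 to 21.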
Optionally, on the basis of each embodiment corresponding to fig. 4, another optional embodiment provided in the embodiments of the present application may further include:
the terminal equipment responds to a shooting setting instruction and displays a lens following control;
and if the selection operation of the lens following control is responded, the terminal equipment controls the sight line direction of the first virtual object to face the virtual camera, wherein the virtual camera is used for virtual shooting processing.
In one or more embodiments, a manner of adjusting a virtual object gaze direction is presented. As can be seen from the foregoing embodiments, the user (i.e., the target object) can set whether the virtual object looks at the lens of the virtual camera.
Specifically, for ease of understanding, please refer to fig. 22, fig. 22 is a schematic view of an interface for adjusting the line of sight of a character in the embodiment of the present application, as shown in fig. 22 (a), R1 is used to indicate a virtual object (e.g., a first virtual object). When the user (i.e., the target object) clicks the setting control, a shooting setting instruction is triggered, and thereby, the setting interface indicated by R2 is displayed. Wherein R3 is used to indicate a lens following control. At this time, the line of sight direction of the first virtual object indicated by R1 is not directed toward the virtual camera.
When the user (i.e., the target object) clicks the lens following control indicated by R3, a selection operation for the lens following control is triggered, whereby an interface as shown in fig. 22 (B) is displayed. At this time, the line of sight direction of the first virtual object is toward the virtual camera.
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 22 are merely illustrative, and should not be construed as limiting the present application.
Secondly, in the embodiment of the present application, a manner of adjusting the visual line direction of the virtual object is provided, and in the manner, when the virtual object is virtually photographed in a game, the visual line direction of the virtual object can be adjusted to face the virtual camera for virtually photographing, so that the photographing effect is improved.
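When the lens following control is selected, the first virtual object's gaze must be turned toward the virtual camera. A minimal sketch of that computation, assuming a horizontal-plane yaw with x to the right and y forward (the coordinate convention and function name are illustrative assumptions):

```python
import math

def look_at_camera_yaw(char_pos, cam_pos):
    """Yaw in degrees [0, 360) that turns a character's line of sight toward
    the virtual camera on the horizontal plane. 0 degrees means the camera is
    straight ahead (+y); angles increase clockwise toward +x."""
    dx = cam_pos[0] - char_pos[0]
    dy = cam_pos[1] - char_pos[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0
```

In practice this yaw would be re-evaluated each frame while the lens following control is active, so the gaze keeps tracking the virtual camera as it moves.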
Optionally, on the basis of each embodiment corresponding to fig. 4, another optional embodiment provided in the embodiments of the present application may further include:
if the adjustment operation aiming at the shooting angle is responded, the terminal equipment displays the picture after the adjustment of the shooting angle;
and if the adjustment operation for the shooting distance is responded, the terminal equipment displays the picture after the shooting distance is adjusted.
In one or more embodiments, a manner of controlling a virtual camera is presented. As can be seen from the foregoing embodiments, the user (i.e., the target object) can set the shooting angle and the shooting distance of the virtual camera according to actual requirements.
For example, referring to fig. 23, fig. 23 is a schematic view of an interface for adjusting a shooting angle according to an embodiment of the present application, as shown in fig. 23 (a), X1 is used to indicate an angle adjustment control, and X2 is used to indicate a shooting control. When the user (i.e., the target object) drags the angle adjustment control, an adjustment operation for the shooting angle is triggered. Assume that the user (i.e., the target object) adjusts the photographing angle to rotate clockwise by 180 degrees, thereby displaying an interface as shown in the (B) diagram in fig. 23. At this time, the picture after the adjustment of the shooting angle is displayed. When the user (i.e., the target object) clicks the photographing control indicated by X2, a photographing operation is triggered.
For example, referring to fig. 24, fig. 24 is a schematic diagram of an interface for adjusting the shooting distance in the embodiment of the present application. As shown in fig. 24 (A), Y1 is used to indicate a distance adjustment control, and Y2 is used to indicate a shooting control. When the user (i.e., the target object) drags the distance adjustment control, an adjustment operation for the shooting distance is triggered. Assume that the user (i.e., the target object) zooms the picture out to 10% of its original size, thereby displaying the interface shown in fig. 24 (B). At this time, the picture after the shooting distance adjustment is displayed. When the user (i.e., the target object) clicks the shooting control indicated by Y2, a shooting operation is triggered.
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 23 and 24 are merely illustrative, and should not be construed as limiting the present application.
Secondly, in the embodiment of the present application, a manner of controlling the virtual camera is provided. In this manner, the user (i.e., the target object) can adjust the shooting angle and shooting distance of the virtual camera according to actual requirements, which improves the flexibility of shooting and is conducive to obtaining a better shooting effect.
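The angle and distance adjustments described for figs. 23 and 24 amount to orbiting the virtual camera around the photographed subject. A minimal sketch under the assumption of a horizontal-plane orbit (the function name and coordinate convention are illustrative):

```python
import math

def orbit_camera(target, angle_deg, distance):
    """Horizontal-plane position of a virtual camera orbiting `target`:
    `angle_deg` corresponds to the angle adjustment control and `distance`
    to the distance adjustment control. At 0 degrees the camera sits on the
    +y side of the target; angles increase clockwise."""
    rad = math.radians(angle_deg)
    return (target[0] + distance * math.sin(rad),
            target[1] + distance * math.cos(rad))
```

Dragging the angle control maps to changing `angle_deg` (e.g. a clockwise 180-degree rotation as in fig. 23), while dragging the distance control maps to changing `distance`.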
Optionally, on the basis of each embodiment corresponding to fig. 4, in another optional embodiment provided by the embodiment of the present application, after the terminal device performs virtual shooting processing on a game scene in response to the shooting instruction, and obtains the first virtual shooting image, the method may further include:
the terminal equipment responds to the storage instruction and stores the first virtual shot image into the electronic album;
and if the viewing instruction aiming at the electronic album is responded, the terminal equipment displays the first virtual shot image.
In one or more embodiments, a manner of automatically saving a first virtual captured image is presented. As can be seen from the foregoing embodiment, when the user (i.e., the target object) clicks the save control, the save instruction is triggered, and thus, the terminal device can save the first virtual captured image into the electronic album. When a user (i.e., a target object) enters the electronic album, a viewing instruction for the electronic album is triggered, and thus, the terminal apparatus can display the first virtual captured image. In order to achieve a more realistic photographing effect, a post-processing (PostProcess) rendering process may also be performed. The following description will be made with reference to the drawings.
Specifically, for ease of understanding, please refer to fig. 25, which is a schematic diagram of the virtual shooting effect in the embodiment of the present application. As shown in fig. 25 (A), after the depth-of-field (DOF) effect is applied, the result is a "far blurred, near sharp" effect that simulates the focusing process of a camera rather than simply blurring the background: the part closer to the virtual camera is sharper, and the part farther from the virtual camera is more blurred.
The effect of using DOF post-processing combined with far transition region (Far Transition Region) adjustment is shown in fig. 25 (B). The starting value of the far transition region determines the extent of blur outside the focal radius, which allows further focusing so that the edges are blurred. For example, the first virtual object indicated by Z1 in the figure is more blurred at the feet than at the upper body, which highlights the upper body of the virtual object, makes the focus more convergent, and effectively gives the picture more regional directionality.
Based on the blur effect of scene depth, the mobile DOF function is used in the post-processing rendering flow: the optimal focal region (Focal Region) value is calculated from the distance between the virtual camera and the close-up object and from the rotation direction, then the starting value of the far transition region is calculated to achieve further focusing with blurred edges, and finally the user (i.e., the target object) can fine-tune the result as needed through the UI.
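The relationship between the camera-to-subject distance, the focal region, and the far transition region start could be sketched as below. The scaling factors are illustrative assumptions; the application only states that both values are derived from the camera-to-subject distance and rotation before the user fine-tunes them:

```python
def dof_params(camera_to_subject, rotation_factor=1.0, edge_blur=0.2):
    """Sketch of the described DOF setup: the focal region tracks the
    camera-to-subject distance (optionally weighted by a rotation-dependent
    factor), and the far transition region begins just beyond it so that
    edges outside the focal radius are blurred."""
    focal_region = camera_to_subject * rotation_factor
    far_transition_begin = focal_region * (1.0 + edge_blur)
    return {"focal_region": focal_region,
            "far_transition_begin": far_transition_begin}
```

A UI slider for fine-tuning would then simply nudge `focal_region` or `far_transition_begin` around these computed defaults.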
It should be noted that the contents of the interface elements and the layout of the interface elements shown in fig. 25 are merely illustrative, and should not be construed as limiting the present application.
In one case, the terminal device responds to the shooting instruction to perform virtual shooting processing on the game scene to obtain a first image, wherein the first image is a game screenshot, and based on the first image, depth of field processing can be performed on the first image to obtain a second image. Thus, the far transition region of the second image can be adjusted, thereby obtaining the first virtual captured image. Similarly, the terminal device responds to the shooting instruction, performs virtual shooting processing on the game scene to obtain a third image, wherein the third image is the game screenshot, and based on the third image, depth of field processing can be performed on the third image to obtain a fourth image. Thus, the far transition region of the fourth image can be adjusted to obtain the second virtual captured image.
In another case, the terminal device performs virtual shooting processing on the game scene in response to the shooting instruction to obtain a first image to be processed, where the first image to be processed is a game screenshot that needs to be scaled to a target size in order to unify the shooting effect across different devices. The first image to be processed is scaled to the target size using interpolation to obtain a first sampled image, and the first virtual captured image is then obtained by cropping. Similarly, the terminal device performs virtual shooting processing on the game scene in response to the shooting instruction to obtain a second image to be processed, which is likewise a game screenshot that needs to be scaled to the target size to unify the shooting effect across different devices. The second image to be processed is scaled to the target size using interpolation to obtain a second sampled image, and the second virtual captured image is then obtained by cropping.
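The scale-then-crop step that unifies captures across devices could be sketched as follows: pick the uniform scale that makes the screenshot cover the target size (the interpolated resampling itself would be done by the engine or an image library), then take a centered crop of exactly the target dimensions. Function and variable names are illustrative:

```python
def unify_capture(src_w, src_h, target_w, target_h):
    """Compute the uniform scale that makes a (src_w, src_h) screenshot cover
    the (target_w, target_h) output, plus the centered crop rectangle
    (left, top, right, bottom) in scaled-image coordinates."""
    scale = max(target_w / src_w, target_h / src_h)  # cover, never letterbox
    scaled_w, scaled_h = round(src_w * scale), round(src_h * scale)
    crop_x = (scaled_w - target_w) // 2
    crop_y = (scaled_h - target_h) // 2
    return scale, (crop_x, crop_y, crop_x + target_w, crop_y + target_h)
```

Using "cover" semantics (scaling by the larger ratio) guarantees the crop rectangle always fits inside the scaled image, so screenshots from devices with different aspect ratios all yield identically sized virtual captured images.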
Secondly, in the embodiment of the present application, a manner of saving the first virtual captured image is provided. In this manner, the user (namely, the target object) can save the captured image in an album so that it can be conveniently viewed or shared subsequently, thereby improving the diversity of the scheme.
Referring to fig. 26, fig. 26 is a schematic diagram of an embodiment of the virtual camera apparatus in an embodiment of the present application. The virtual camera apparatus 20 includes:
the display module 210 is configured to, in response to the object invitation instruction, display a first add control on a preset area when the first virtual object is in a non-interactive state, where the first virtual object is a virtual object controlled by the target object;
the display module 210 is further configured to display a first object subpage in response to a touch operation for the first add control, where the first object subpage includes K virtual objects, and K is an integer greater than 1;
the display module 210 is further configured to display, in response to a selection operation for a plurality of second virtual objects in the first object sub-page, the plurality of second virtual objects and the first virtual object, where the plurality of second virtual objects are included in the K virtual objects;
the shooting module 220 is configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a first virtual shooting image, where the game scene includes a plurality of second virtual objects and a first virtual object.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to, after responding to the object invitation instruction, display a second add control on the interactive object when the first virtual object is in the interactive state, where the interactive object is an object that supports interaction with the virtual object;
the display module 210 is further configured to display a second object sub-page in response to the touch operation for the second add control;
the display module 210 is further configured to display a third virtual object and the first virtual object in response to a selection operation for the third virtual object in the second object sub-page;
the shooting module 220 is further configured to perform virtual shooting processing on a game scene in response to the shooting instruction to obtain a second virtual shooting image, where the game scene includes a third virtual object and the first virtual object.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is specifically configured to display T virtual objects on the first object sub-page, where each virtual object in the T virtual objects is a player character, and T is an integer greater than or equal to 1 and less than or equal to K.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is specifically configured to display object invitation controls corresponding to a plurality of second virtual objects on the first object sub-page;
responding to a touch operation for the object invitation control, and sending a group-photo request to a target terminal device, wherein the target terminal device is the terminal device controlling the plurality of second virtual objects;
and if the group-photo request is responded to successfully, displaying the plurality of second virtual objects and the first virtual object.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is specifically configured to display T virtual objects on the first object sub-page, where each virtual object in the T virtual objects is a non-player character, and T is an integer greater than or equal to 1 and less than or equal to K.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is specifically configured to display object invitation controls corresponding to a plurality of second virtual objects on the first object sub-page, where interaction values between the plurality of second virtual objects and the first virtual object are greater than or equal to an interaction threshold;
and responding to the touch operation aiming at the object invitation control, and displaying a plurality of second virtual objects and the first virtual object.
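The interaction-threshold filter described above (only non-player characters whose interaction value with the first virtual object meets the threshold receive an invitation control) can be sketched minimally; the names `invitable_npcs` and `interaction_values` are hypothetical:

```python
def invitable_npcs(npcs, interaction_values, threshold):
    """Return the NPCs whose accumulated interaction value with the player's
    character meets the threshold, i.e. those eligible for an object
    invitation control on the object sub-page. Names are illustrative."""
    return [npc for npc in npcs if interaction_values.get(npc, 0) >= threshold]
```

NPCs with no recorded interaction default to a value of 0 and are therefore excluded until the player has interacted with them enough.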
Optionally, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual photographing apparatus 20 provided in the embodiment of the present application, the virtual photographing apparatus 20 includes a control module 230;
the display module 210 is further configured to display, after displaying the plurality of second virtual objects and the first virtual object, a first action subpage in response to an action setting instruction for the second virtual object, where the first action subpage includes an action that can be performed by the second virtual object, and the second virtual object is a non-player character;
a control module 230, configured to, in response to a selection operation for a first action in the first action sub-page, control the second virtual object to perform the first action;
alternatively,
the display module 210 is further configured to, after displaying the plurality of second virtual objects and the first virtual object, in response to an action setting instruction for the first virtual object, display a second action subpage, where the second action subpage includes an action that can be performed by the first virtual object;
the control module 230 is further configured to control the first virtual object to perform the second action in response to a selection operation for the second action in the second action sub-page.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to display, after the plurality of second virtual objects and the first virtual object are displayed, a first handheld sub-page in response to a handheld setting instruction for the second virtual objects, where the first handheld sub-page includes a handheld object usable by the second virtual object, and the second virtual object is a non-player character;
the display module 210 is further configured to display a second virtual object holding the first handheld object in response to a selection operation for the first handheld object in the first handheld sub-page;
alternatively,
the display module 210 is further configured to, after displaying the plurality of second virtual objects and the first virtual object, in response to a handheld setting instruction for the first virtual object, display a second handheld sub-page, where the second handheld sub-page includes a handheld object usable by the first virtual object;
the display module 210 is further configured to display the first virtual object holding the second handheld object in response to a selection operation for the second handheld object in the second handheld sub page.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to, after displaying a plurality of second virtual objects and the first virtual object, in response to a mood setting instruction for the second virtual objects, display a first mood sub-page, where the first mood sub-page includes a mood that the second virtual objects can represent, and the second virtual objects are non-player characters;
a display module 210, further configured to display the second virtual object with the first mood in response to a selection operation for the first mood in the first mood sub-page;
alternatively,
the display module 210 is further configured to, after displaying the plurality of second virtual objects and the first virtual object, in response to a mood setting instruction for the first virtual object, display a second mood sub-page, where the second mood sub-page includes a mood that the first virtual object can represent;
the display module 210 is further configured to display the first virtual object with the second mood in response to a selection operation for the second mood in the second mood sub-page.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to display a scene special effect sub-page in response to the special effect display instruction after the plurality of second virtual objects and the first virtual object are displayed, where the scene special effect sub-page includes a playable scene special effect;
the display module 210 is further configured to play the target scene special effect in response to a selection operation for the target scene special effect in the scene special effect sub-page.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to display M object controls in response to the shooting setting instruction, where each object control corresponds to a type of virtual object, and M is an integer greater than or equal to 1;
the display module 210 is further configured to, in response to a first selection operation for a first object control, cancel displaying the first virtual object, where the first object control is included in the M object controls and corresponds to the first virtual object;
the display module 210 is further configured to, in response to a first selection operation for a second object control, cancel displaying an associated virtual object, where the associated virtual object and the first virtual object belong to the same game team, and the second object control is included in the M object controls and corresponds to the associated virtual object;
the display module 210 is further configured to, in response to a first selection operation for a third object control, cancel displaying a non-associated virtual object, where no friend relationship is established between the non-associated virtual object and the first virtual object, and the third object control is included in the M object controls and corresponds to the non-associated virtual object;
the display module 210 is further configured to, in response to a first selection operation for a fourth object control, cancel displaying the non-player character, where the fourth object control is included in the M object controls and corresponds to the non-player character.
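The per-category show/hide behaviour of the M object controls (a first selection hides a category, a second selection shows it again) can be sketched as a small state holder; the class and the category names are hypothetical:

```python
class ShotVisibility:
    """Track which categories of virtual object are shown in the framing
    screen. A first tap on a category's control hides it, a second tap
    shows it again, mirroring the first/second selection operations."""

    def __init__(self, categories):
        # Every category starts visible in the shot.
        self.visible = {c: True for c in categories}

    def toggle(self, category):
        """Flip a category's visibility and return its new state."""
        self.visible[category] = not self.visible[category]
        return self.visible[category]
```

This keeps each control independent: hiding non-associated objects, for example, has no effect on whether teammates or NPCs remain in the frame.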
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to, after displaying the M object controls, display the first virtual object in response to a second selection operation for the first object control;
the display module 210 is further configured to, after displaying the M object controls, display the associated virtual object in response to a second selection operation for the second object control;
the display module 210 is further configured to, after displaying the M object controls, display the non-associated virtual object in response to a second selection operation for the third object control;
the display module 210 is further configured to, after displaying the M object controls, display the non-player character in response to a second selection operation for the fourth object control.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to display a lens following control in response to the shooting setting instruction;
the control module 230 is further configured to control a line of sight direction of the first virtual object toward the virtual camera if the lens following control is selected, where the virtual camera is used for performing virtual shooting processing.
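The effect of the lens-following control, orienting the first virtual object's line of sight toward the virtual camera, amounts to computing a look-at rotation from the character's position to the camera's position. The following sketch assumes a y-up coordinate convention, which is not specified in this application:

```python
import math

def look_at_camera(character_pos, camera_pos):
    """Yaw and pitch (radians) that orient a character's line of sight
    toward the virtual camera. Assumes a y-up, z-forward convention."""
    dx = camera_pos[0] - character_pos[0]
    dy = camera_pos[1] - character_pos[1]
    dz = camera_pos[2] - character_pos[2]
    # Yaw: rotation around the vertical axis in the ground plane.
    yaw = math.atan2(dx, dz)
    # Pitch: elevation of the camera above the ground-plane distance.
    pitch = math.atan2(dy, math.hypot(dx, dz))
    return yaw, pitch
```

Re-running this each frame while the control is selected keeps the character's gaze locked onto the camera as the shooting angle and distance are adjusted.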
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the display module 210 is further configured to, in response to an adjustment operation for the shooting angle, display the picture after the shooting angle is adjusted;
the display module 210 is further configured to, in response to an adjustment operation for the shooting distance, display the picture after the shooting distance is adjusted.
Optionally, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual photographing apparatus 20 provided in the embodiment of the present application, the virtual photographing apparatus 20 further includes a storage module 240;
the saving module 240 is configured to, after the virtual shooting processing is performed on the game scene in response to the shooting instruction to obtain the first virtual shot image, save the first virtual shot image in an electronic album in response to a saving instruction;
the display module 210 is further configured to, in response to a viewing instruction for the electronic album, display the first virtual shot image.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a first image;
carrying out depth of field processing on the first image to obtain a second image;
and adjusting the far transition area of the second image to obtain a first virtual shot image.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a third image;
carrying out depth of field processing on the third image to obtain a fourth image;
and adjusting the far transition area of the fourth image to obtain a second virtual shot image.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a first image to be processed;
scaling the first image to be processed to a target size to obtain a first sampling image;
and performing cropping processing on the first sampling image to obtain the first virtual shot image.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform reduction processing on the first image to be processed by using an interpolation algorithm to obtain a first sampling image with a target size.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a second image to be processed;
scaling the second image to be processed to a target size to obtain a second sampling image;
and performing clipping processing on the second sampling image to obtain a second virtual shot image.
Alternatively, on the basis of the embodiment corresponding to fig. 26, in another embodiment of the virtual camera device 20 provided in the embodiment of the present application,
the shooting module 220 is specifically configured to perform reduction processing on the second image to be processed by using an interpolation algorithm to obtain a second sampling image with a target size.
An embodiment of the present application further provides a terminal device. As shown in fig. 27, for convenience of description, only the portions related to the embodiments of the present application are shown; for undisclosed technical details, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The following takes the terminal device being a mobile phone as an example:
fig. 27 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 27, the cellular phone includes: radio Frequency (RF) circuit 310, memory 320, input unit 330, display unit 340, sensor 350, audio circuit 360, wireless fidelity (WiFi) module 370, processor 380, and power supply 390. Those skilled in the art will appreciate that the handset configuration shown in fig. 27 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 27:
the RF circuit 310 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 380; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 310 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 320 may be used to store software programs and modules, and the processor 380 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 320. The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also referred to as a touch screen, can collect touch operations of a user on or near it (e.g., operations performed on or near the touch panel 331 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 331 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 380, and can also receive and execute commands sent by the processor 380. In addition, the touch panel 331 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 330 may include other input devices 332 in addition to the touch panel 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 340 may include a display panel 341; optionally, the display panel 341 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 331 can cover the display panel 341; when the touch panel 331 detects a touch operation on or near it, the touch operation is transmitted to the processor 380 to determine the type of the touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in fig. 27 the touch panel 331 and the display panel 341 are two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 331 and the display panel 341 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 350, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 360, the speaker 361, and the microphone 362 may provide an audio interface between the user and the handset. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, which converts it into a sound signal for output; on the other hand, the microphone 362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 360 and converted into audio data; the audio data is then processed by the processor 380 and transmitted, for example, to another mobile phone via the RF circuit 310, or output to the memory 320 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 370, the mobile phone can help the user to send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband internet access. Although fig. 27 shows the WiFi module 370, it is understood that it is not an essential component of the handset and may be omitted as needed without changing the essence of the invention.
The processor 380 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320. Optionally, processor 380 may include one or more processing units; optionally, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
The handset also includes a power supply 390 (e.g., a battery) for powering the various components. Optionally, the power supply may be logically connected to the processor 380 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
The steps performed by the terminal device in the above-described embodiment may be based on the terminal device configuration shown in fig. 27.
Embodiments of the present application also provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method described in the foregoing embodiments.
Embodiments of the present application also provide a computer program product including a program, which, when run on a computer, causes the computer to perform the methods described in the foregoing embodiments.
It is understood that the specific implementation of the present application involves related data such as user operation information. When the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use, and processing of the related data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (25)

1. A method of virtually photographing, comprising:
in response to an object invitation instruction, displaying a first adding control in a preset area when a first virtual object is in a non-interactive state, wherein the first virtual object is a virtual object controlled by a target object;
in response to a touch operation for the first adding control, displaying a first object subpage, wherein the first object subpage comprises K virtual objects, and K is an integer greater than 1;
in response to a selection operation for a plurality of second virtual objects in the first object subpage, displaying the plurality of second virtual objects and the first virtual object, wherein the plurality of second virtual objects are included in the K virtual objects;
and in response to a shooting instruction, performing virtual shooting processing on a game scene to obtain a first virtual shooting image, wherein the game scene comprises the plurality of second virtual objects and the first virtual object.
2. The method of claim 1, wherein after the object invitation instruction is responded to, the method further comprises:
displaying, when the first virtual object is in an interactive state, a second adding control on an interactive object, wherein the interactive object is an object supporting interaction with the virtual object;
in response to a touch operation for the second adding control, displaying a second object subpage;
in response to a selection operation for a third virtual object in the second object subpage, displaying the third virtual object and the first virtual object;
and in response to a shooting instruction, performing virtual shooting processing on a game scene to obtain a second virtual shooting image, wherein the game scene comprises the third virtual object and the first virtual object.
3. The method of claim 1, wherein said displaying a first object subpage comprises:
displaying T virtual objects on the first object sub-page, wherein each virtual object in the T virtual objects is a player character, and T is an integer greater than or equal to 1 and less than or equal to K.
4. The method of claim 3, wherein displaying the plurality of second virtual objects and the first virtual object in response to a selection operation for the plurality of second virtual objects in the first object subpage comprises:
displaying object invitation controls corresponding to the plurality of second virtual objects on the first object sub-page;
in response to a touch operation for the object invitation control, sending a group-photo request to a target terminal device, wherein the target terminal device is the terminal device controlling the plurality of second virtual objects;
and if the group-photo request is successfully responded to, displaying the plurality of second virtual objects and the first virtual object.
5. The method of claim 1, wherein said displaying a first object subpage comprises:
displaying T virtual objects on the first object sub-page, wherein each virtual object in the T virtual objects is a non-player character, and T is an integer greater than or equal to 1 and less than or equal to K.
6. The method of claim 5, wherein displaying the plurality of second virtual objects and the first virtual object in response to a selection operation for the plurality of second virtual objects in the first object subpage comprises:
displaying object invitation controls corresponding to the second virtual objects on the first object sub-page, wherein the interaction values between the second virtual objects and the first virtual object are greater than or equal to an interaction threshold value;
in response to a touch operation for the object invitation control, displaying the plurality of second virtual objects and the first virtual object.
7. The method of claim 1, wherein after displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to an action setting instruction for a second virtual object, displaying a first action subpage, wherein the first action subpage comprises an action executable by the second virtual object, and the second virtual object is a non-player character;
in response to a selection operation for a first action in the first action subpage, controlling the second virtual object to execute the first action;
alternatively,
after the displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to an action setting instruction for the first virtual object, displaying a second action subpage, wherein the second action subpage comprises an action executable by the first virtual object;
and in response to a selection operation for a second action in the second action subpage, controlling the first virtual object to execute the second action.
8. The method of claim 1, wherein after displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to a handheld setting instruction for a second virtual object, displaying a first handheld subpage, wherein the first handheld subpage comprises a handheld object usable by the second virtual object, and the second virtual object is a non-player character;
in response to a selection operation for a first handheld object in the first handheld subpage, displaying the second virtual object holding the first handheld object;
alternatively,
after the displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to a handheld setting instruction for the first virtual object, displaying a second handheld subpage, wherein the second handheld subpage comprises a handheld object usable by the first virtual object;
and in response to a selection operation for a second handheld object in the second handheld subpage, displaying the first virtual object holding the second handheld object.
9. The method of claim 1, wherein after displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to a mood setting instruction for a second virtual object, displaying a first mood subpage, wherein the first mood subpage comprises a mood expressible by the second virtual object, and the second virtual object is a non-player character;
in response to a selection operation for a first mood in the first mood subpage, displaying the second virtual object having the first mood;
alternatively,
after the displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to a mood setting instruction for the first virtual object, displaying a second mood subpage, wherein the second mood subpage comprises a mood expressible by the first virtual object;
and in response to a selection operation for a second mood in the second mood subpage, displaying the first virtual object having the second mood.
10. The method of claim 1, wherein after displaying the plurality of second virtual objects and the first virtual object, the method further comprises:
in response to a special effect display instruction, displaying a scene special effect subpage, wherein the scene special effect subpage comprises a playable scene special effect;
and in response to a selection operation for a target scene special effect in the scene special effect subpage, playing the target scene special effect.
11. The method of claim 1, further comprising:
in response to a shooting setting instruction, displaying M object controls, wherein each object control corresponds to one class of virtual object, and M is an integer greater than or equal to 1;
in response to a first selection operation for a first object control, canceling display of the first virtual object, wherein the first object control is included in the M object controls, and the first object control corresponds to the first virtual object;
in response to a first selection operation for a second object control, canceling display of an associated virtual object, wherein the associated virtual object and the first virtual object belong to the same game team, the second object control is included in the M object controls, and the second object control corresponds to the associated virtual object;
in response to a first selection operation for a third object control, canceling display of a non-associated virtual object, wherein no friend relationship is established between the non-associated virtual object and the first virtual object, the third object control is included in the M object controls, and the third object control corresponds to the non-associated virtual object;
and in response to a first selection operation for a fourth object control, canceling display of a non-player character, wherein the fourth object control is included in the M object controls, and the fourth object control corresponds to the non-player character.
12. The method of claim 11, wherein after displaying the M object controls, the method further comprises:
in response to a second selection operation for the first object control, displaying the first virtual object;
in response to a second selection operation for the second object control, displaying the associated virtual object;
in response to a second selection operation for the third object control, displaying the non-associated virtual object;
and in response to a second selection operation for the fourth object control, displaying the non-player character.
13. The method of claim 1, further comprising:
in response to a shooting setting instruction, displaying a lens following control;
and in response to a selection operation for the lens following control, controlling the sight-line direction of the first virtual object to face a virtual camera, wherein the virtual camera is used for performing the virtual shooting processing.
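Where an implementation of the lens-following behavior of claim 13 is wanted, the underlying math is a look-at vector from the character's head to the virtual camera. The following is an illustrative sketch only; the claim specifies no concrete algorithm, and the tuple representation and helper names are assumptions:

```python
import math

def normalize(v):
    """Return the unit vector pointing along v."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def gaze_toward_camera(head_pos, camera_pos):
    """Direction the character's sight line should face so it looks into the lens."""
    return normalize(tuple(c - h for h, c in zip(head_pos, camera_pos)))
```

A game engine would typically convert this direction into a head or eye rotation each frame while the lens following control remains selected.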
14. The method of claim 1, further comprising:
in response to an adjustment operation for a shooting angle, displaying a picture after the shooting angle is adjusted;
and in response to an adjustment operation for a shooting distance, displaying a picture after the shooting distance is adjusted.
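The shooting-angle and shooting-distance adjustments of claim 14 are commonly realized with an orbit camera placed on a sphere around the subject. The spherical-coordinate formulation below is an illustrative assumption, not a formulation taken from the patent:

```python
import math

def orbit_camera(target, yaw_deg, pitch_deg, distance):
    """Place the virtual camera at the given angles and distance from the target.

    yaw rotates around the vertical axis; pitch lifts the camera above the
    horizontal plane; distance is the shooting distance from the target.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = target[0] + distance * math.cos(pitch) * math.sin(yaw)
    y = target[1] + distance * math.sin(pitch)
    z = target[2] + distance * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

Dragging to adjust the angle changes yaw/pitch; pinching to adjust the distance changes only `distance`, so the camera stays aimed at the same subject.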
15. The method according to any one of claims 1 to 14, wherein after the virtual shooting processing is performed on the game scene in response to the shooting instruction to obtain the first virtual shooting image, the method further comprises:
in response to a saving instruction, saving the first virtual shooting image to an electronic album;
and in response to a viewing instruction for the electronic album, displaying the first virtual shooting image.
16. The method of claim 1, wherein the virtually shooting the game scene in response to the shooting instruction to obtain the first virtually shot image comprises:
in response to a shooting instruction, performing virtual shooting processing on a game scene to obtain a first image;
carrying out depth of field processing on the first image to obtain a second image;
and adjusting the far transition region of the second image to obtain the first virtual shot image.
17. The method of claim 2, wherein the virtually shooting the game scene in response to the shooting instruction to obtain a second virtually shot image comprises:
in response to the shooting instruction, performing virtual shooting processing on the game scene to obtain a third image;
performing depth of field processing on the third image to obtain a fourth image;
and adjusting the far transition region of the fourth image to obtain the second virtual shot image.
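Claims 16 and 17 describe the same three-step pipeline: capture, depth-of-field processing, then adjustment of the far transition region. The grayscale list-of-lists representation, the box blur, the focus threshold, and the fade function below are all illustrative assumptions; the claims name no concrete algorithms:

```python
def box_blur(img, radius=1):
    """Blur a 2D grayscale image (list of lists of floats) with a box filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def apply_depth_of_field(img, depth, focus, blur_radius=1):
    """Keep in-focus pixels sharp; replace out-of-focus pixels with blurred ones."""
    blurred = box_blur(img, blur_radius)
    return [
        [img[y][x] if abs(depth[y][x] - focus) < 0.2 else blurred[y][x]
         for x in range(len(img[0]))]
        for y in range(len(img))
    ]

def adjust_far_transition(img, depth, far_start=0.7, fade=0.5):
    """Fade pixels in the far transition region to soften the depth-of-field edge."""
    return [
        [px * (1.0 - fade * max(0.0, d - far_start) / (1.0 - far_start))
         for px, d in zip(row, drow)]
        for row, drow in zip(img, depth)
    ]
```

A production renderer would do this on the GPU with the scene's real depth buffer; the sketch only mirrors the claimed ordering of the two post-processing steps.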
18. The method of claim 1, wherein the virtually shooting the game scene in response to the shooting instruction to obtain the first virtually shot image comprises:
in response to a shooting instruction, performing virtual shooting processing on a game scene to obtain a first image to be processed;
scaling the first image to be processed to a target size to obtain a first sampled image;
and performing cropping processing on the first sampled image to obtain the first virtual shooting image.
19. The method of claim 18, wherein scaling the first to-be-processed image to a target size to obtain a first sampled image comprises:
performing reduction processing on the first image to be processed by using an interpolation algorithm to obtain the first sampled image at the target size.
20. The method of claim 2, wherein the virtually shooting the game scene in response to the shooting instruction to obtain a second virtually shot image comprises:
in response to the shooting instruction, performing virtual shooting processing on the game scene to obtain a second image to be processed;
scaling the second image to be processed to a target size to obtain a second sampled image;
and performing cropping processing on the second sampled image to obtain the second virtual shooting image.
21. The method of claim 20, wherein scaling the second to-be-processed image to a target size to obtain a second sampled image comprises:
performing reduction processing on the second image to be processed by using an interpolation algorithm to obtain the second sampled image at the target size.
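Claims 18 to 21 describe a scale-then-crop pipeline in which an interpolation algorithm reduces the captured frame to a target size before cropping out the final photo. The bilinear interpolation and center crop below are illustrative assumptions (the claims say only "an interpolation algorithm" and "cropping processing"):

```python
def bilinear_resize(img, new_w, new_h):
    """Downscale a 2D grayscale image via bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Map each output pixel back into source coordinates.
        sy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for x in range(new_w):
            sx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, w - 1)
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def center_crop(img, crop_w, crop_h):
    """Crop the central crop_w x crop_h region of a 2D image."""
    h, w = len(img), len(img[0])
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return [row[left:left + crop_w] for row in img[top:top + crop_h]]

def make_virtual_photo(frame, target_w, target_h, crop_w, crop_h):
    """Scale-then-crop pipeline as ordered in claims 18 and 19."""
    return center_crop(bilinear_resize(frame, target_w, target_h), crop_w, crop_h)
```

Reducing before cropping keeps the interpolation cost proportional to the smaller target size, which matters when the captured frame is a full-resolution render.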
22. A virtual camera apparatus, comprising:
a display module, configured to display, in response to an object invitation instruction, a first adding control in a preset area when a first virtual object is in a non-interactive state, wherein the first virtual object is a virtual object controlled by a target object;
wherein the display module is further configured to display a first object subpage in response to a touch operation for the first adding control, the first object subpage comprising K virtual objects, K being an integer greater than 1;
the display module is further configured to display a plurality of second virtual objects and the first virtual object in response to a selection operation for the plurality of second virtual objects in the first object subpage, the plurality of second virtual objects being included in the K virtual objects;
and a shooting module, configured to perform virtual shooting processing on a game scene in response to a shooting instruction to obtain a first virtual shooting image, wherein the game scene comprises the plurality of second virtual objects and the first virtual object.
23. A terminal device, comprising: a memory, a processor, and a bus system;
wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory, to perform the method of any one of claims 1 to 21 according to instructions in the program code;
and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate.
24. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 21.
25. A computer program product comprising a computer program and instructions, wherein the computer program and instructions, when executed by a processor, implement the method of any one of claims 1 to 21.
CN202210013318.8A 2022-01-06 2022-01-06 Virtual photographing method, related device, equipment and storage medium Pending CN114392565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210013318.8A CN114392565A (en) 2022-01-06 2022-01-06 Virtual photographing method, related device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114392565A true CN114392565A (en) 2022-04-26

Family

ID=81228926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210013318.8A Pending CN114392565A (en) 2022-01-06 2022-01-06 Virtual photographing method, related device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114392565A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40071017; Country of ref document: HK)