CN116774902A - Virtual camera configuration method, device, equipment and storage medium - Google Patents

Virtual camera configuration method, device, equipment and storage medium

Info

Publication number
CN116774902A
CN116774902A (Application CN202310587882.5A)
Authority
CN
China
Prior art keywords
virtual camera
sequence
virtual
configuration
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310587882.5A
Other languages
Chinese (zh)
Inventor
李想
赵潇滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310587882.5A priority Critical patent/CN116774902A/en
Publication of CN116774902A publication Critical patent/CN116774902A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance (same G06F3/048 branch as above)
    • G06F3/04842Selection of displayed objects or displayed text elements (same G06F3/0484 branch as above)
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual camera configuration method, apparatus, device, and storage medium, relating to the field of computer and Internet technology. The method comprises: displaying a user interface of a camera-movement tool, an import control being displayed in the user interface; in response to an operation on the import control, displaying a virtual camera sequence generated from imported data; in response to an operation on a first virtual camera of the at least one virtual camera, displaying the configuration parameters of the first virtual camera; and, in response to a configuration-save operation on the virtual camera sequence, saving the configuration information of the sequence. By importing data into the camera-movement tool, the tool automatically generates and displays a virtual camera sequence. Animators no longer need to adjust each virtual camera by hand, determining its position and motion trajectory from the footage it captures; the configuration process is thereby simplified, the operation is simple, and configuration efficiency is improved.

Description

Virtual camera configuration method, device, equipment and storage medium
Technical Field
Embodiments of the present application relate to the field of computer and Internet technology, and in particular to a virtual camera configuration method, apparatus, device, and storage medium.
Background
The virtual camera may capture a virtual object in a virtual scene to generate an animation associated with the virtual object.
In the related art, a virtual camera is configured as follows: an animator adjusts the position and motion trajectory of the virtual camera, obtains a preview of the footage captured by the camera, and keeps adjusting the position and trajectory according to the preview's effect until they are judged suitable, at which point the configuration of the virtual camera is complete.
However, this configuration method is cumbersome to operate and inefficient.
Disclosure of Invention
The embodiments of the present application provide a virtual camera configuration method, apparatus, device, and storage medium. The technical solutions provided by the embodiments of the present application are as follows.
According to an aspect of an embodiment of the present application, there is provided a method for configuring a virtual camera, the method including:
displaying a user interface of the camera-movement tool, wherein an import control is displayed in the user interface;
in response to an operation on the import control, displaying a virtual camera sequence generated from imported data, wherein the virtual camera sequence comprises at least one virtual camera, and the imported data defines the type and order of each virtual camera;
in response to an operation on a first virtual camera of the at least one virtual camera, displaying configuration parameters of the first virtual camera, the configuration parameters being used to determine a camera-movement rule of the virtual camera in a virtual scene and being automatically generated by the camera-movement tool from the imported data;
and in response to a configuration-save operation on the virtual camera sequence, saving configuration information of the virtual camera sequence, wherein the configuration information comprises the configuration parameters of each virtual camera contained in the sequence.
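For illustration only, the four steps above might be sketched as a minimal tool skeleton. All names here (`CameraMovementTool`, `import_rows`, the row fields) are this sketch's own assumptions, not identifiers from the patent:

```python
import json

# Hypothetical sketch of the four steps: import data, generate the
# camera sequence, expose per-camera parameters, save configuration.
class CameraMovementTool:
    def __init__(self):
        self.sequence = []  # virtual cameras in shooting order

    def import_rows(self, rows):
        # Step 2: generate the virtual camera sequence from imported
        # data; each row defines a camera's type and its order.
        self.sequence = sorted(rows, key=lambda r: r["start_frame"])
        return self.sequence

    def parameters_of(self, index):
        # Step 3: expose the configuration parameters of one camera.
        return self.sequence[index]

    def save(self):
        # Step 4: persist the configuration info of the whole sequence.
        return json.dumps(self.sequence)

tool = CameraMovementTool()
tool.import_rows([
    {"shot_name": "push-pull", "start_frame": 40, "end_frame": 80},
    {"shot_name": "long shot", "start_frame": 0, "end_frame": 40},
])
print(tool.parameters_of(0)["shot_name"])  # first camera in shooting order
```

Sorting by start frame reproduces the claimed behavior that the imported data alone fixes the order of the cameras in the sequence.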
According to an aspect of an embodiment of the present application, there is provided a configuration apparatus of a virtual camera, the apparatus including:
a first display module, configured to display a user interface of the camera-movement tool, an import control being displayed in the user interface;
a second display module, configured to display, in response to an operation on the import control, a virtual camera sequence generated from imported data, the virtual camera sequence including at least one virtual camera, the imported data defining the type and order of each virtual camera;
a third display module, configured to display, in response to an operation on a first virtual camera of the at least one virtual camera, configuration parameters of the first virtual camera, the configuration parameters being used to determine a camera-movement rule of the virtual camera in a virtual scene and being automatically generated by the camera-movement tool from the imported data;
and a saving module, configured to save, in response to a configuration-save operation on the virtual camera sequence, configuration information of the virtual camera sequence, the configuration information including the configuration parameters of each virtual camera contained in the sequence.
According to an aspect of an embodiment of the present application, there is provided a terminal device including a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program to implement the method for configuring a virtual camera described above.
According to an aspect of an embodiment of the present application, there is provided a computer-readable storage medium having stored therein a computer program loaded and executed by a processor to implement the above-described configuration method of a virtual camera.
According to an aspect of an embodiment of the present application, there is provided a computer program product comprising a computer program loaded and executed by a processor to implement the above-described virtual camera configuration method.
The technical solutions provided by the embodiments of the present application can bring the following beneficial effects:
by importing data into the camera-movement tool, the tool automatically generates and displays a virtual camera sequence, completing the configuration of the virtual cameras. Animators no longer need to adjust each virtual camera by hand, determining its position and motion trajectory from the footage it captures; the configuration process is thereby simplified, the operation is simple, and configuration efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of an implementation environment provided by one embodiment of the present application;
FIG. 2 is a schematic diagram of keyframes provided by one embodiment of the present application;
FIG. 3 is a flowchart of a virtual camera configuration method provided by one embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface of a camera-movement tool provided by one embodiment of the present application;
FIG. 5 is a schematic diagram of a user interface of a game engine provided by one embodiment of the present application;
FIG. 6 is a schematic diagram of a file selection interface provided by one embodiment of the present application;
FIG. 7 is a schematic diagram of adding a virtual camera provided by one embodiment of the present application;
FIG. 8 is a schematic diagram of previewing a virtual camera provided by one embodiment of the present application;
FIG. 9 is a schematic diagram of a migrated virtual camera sequence provided by one embodiment of the present application;
FIG. 10 is a schematic diagram of clipping a virtual camera provided by one embodiment of the present application;
FIG. 11 is a block diagram of the common-preset adding module of a camera-movement tool provided by one embodiment of the present application;
FIG. 12 is a block diagram of the shot and sequence creation module of a camera-movement tool provided by one embodiment of the present application;
FIG. 13 is a block diagram of the quick clip and preview module of a camera-movement tool provided by one embodiment of the present application;
FIG. 14 is a block diagram of a virtual camera configuration apparatus provided by one embodiment of the present application;
FIG. 15 is a block diagram of a virtual camera configuration apparatus provided by another embodiment of the present application;
FIG. 16 is a block diagram of a terminal device provided by one embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before describing embodiments of the present application, some terms involved in the present application will be first described.
1. Game engine: a software development environment designed for creating real-time interactive content. Game engines were initially used mainly to develop video games and are now widely used in other fields, such as animation and visual-effects production.
2. Real-time animation pipeline: the computer renders each frame and displays it immediately, at a frame rate of at least 25 fps so that the output appears continuous and real-time to the human eye. A game engine lets teams quickly work with high-resolution assets, physically based simulation and motion, realistic materials and cloth, a wide variety of particle effects, as well as cameras and complex lighting. It offers real-time control, rapid iteration, and fast creative decision-making that were previously unattainable.
3. Virtual camera: a camera inside the game engine. It can be controlled manually by a user, driven by directly assigned values, or driven by a mobile device (such as a phone or tablet). In some embodiments, the virtual camera may also be referred to as a virtual video camera, virtual lens, and so on; the present application is not limited in this regard.
4. Sequencer: the multi-track editor of Unreal Engine, used to create and preview cinematic sequences in real time. By creating a Level Sequence and adding Tracks, the user defines the composition of the individual tracks and thus determines the content of the scene. A Track may contain Animation (for animating characters), Transform (for moving anything in the scene), Audio (for music or sound effects), and several other track types.
5. Non-linear editing: a modern editing approach in film and television post-production, meaning that any frame in a video clip can be accessed directly.
6. Lens language: telling the story through the camera. Through changes in the shot subject and the picture, the viewer perceives what the filmmaker intends to express with the lens.
7. Animation: a form of artistic expression. Animation is a comprehensive art that integrates painting, comics, film, digital media, photography, music, literature, and other artistic categories.
8. Video: a playback format. Video generally refers to the various techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When images change at more than 24 frames per second, by the persistence-of-vision principle the human eye cannot distinguish individual still pictures and instead perceives a smooth, continuous visual effect; such a sequence of pictures is called video. Animation and video are both important media forms in multimedia technology and are closely related. They are often considered the same thing because both belong to the category of "dynamic images" — continuous still images or graphics displayed in sequence along a time axis to produce the perception of motion. They are, however, two different concepts. Each frame of an animation is generated manually or by computer, and the still frames are played back at roughly 15 to 20 frames per second, exploiting the characteristics of the human eye to create a sense of motion. Each frame of a video, by contrast, is captured in real time from a natural scene or moving object; the video signal may be produced by a continuous-image input device such as a video camera or video recorder.
Referring to FIG. 1, a schematic diagram of an implementation environment of an embodiment of the present application is shown. The implementation environment may be realized as the architecture of a virtual camera configuration system and may include: a terminal device 100 and a server 200.
The terminal device 100 may be an electronic device such as a PC (Personal Computer), a tablet, a mobile phone, a wearable device, or an in-vehicle terminal. A client running a target application may be installed in the terminal device 100; the target application may be one with animation functionality, such as a game engine. Illustratively, the target application is a game engine that can run a camera-movement tool and configure virtual cameras through it to generate a virtual camera sequence. The camera-movement tool may be a plug-in within the target application, or an application independent of it. In addition, the present application does not limit the form of the target application, which includes but is not limited to an App (Application) or applet installed in the terminal device 100, and may also take the form of a web page.
The server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The server 200 may be a background server of the target application program, and is configured to provide a background service for a client of the target application program.
Communication between the terminal device 100 and the server 200 may be performed through a network, such as a wired or wireless network.
In the virtual camera configuration method provided by the embodiments of the present application, each step may be executed by a terminal device. Taking the implementation environment shown in FIG. 1 as an example, the method may be executed by the terminal device 100 — for example, by the client of the target application installed and running in the terminal device 100, or by the camera-movement tool installed and running in the terminal device 100.
In the related art, to configure a virtual camera sequence, a planner first proposes a camera-movement idea to an animator. The animator then manually records the virtual camera's animation into the animation sequence as keyframes in the game engine, which requires repeatedly computing, adjusting, and smoothing complex camera motions. The planner and the animator may go through multiple rounds of communication and repeated revisions before the shot animation is finally complete.
As shown in FIG. 2, a column containing dot markers represents a keyframe, such as keyframe 210; the column of dots indicates the position and configuration parameters of the virtual camera in the virtual scene at that keyframe. For example, the dots in keyframe 210 represent, from top to bottom, the virtual camera's x-axis coordinate, y-axis coordinate, z-axis coordinate, roll, pitch, yaw, and scale parameters in the virtual scene. Each keyframe may set one or more of these parameters; for example, keyframe 220 sets only the y-axis and z-axis coordinates. Planners and animators typically judge how to set a given camera position and motion path from the footage captured by the virtual camera, and then manually set keyframes as shown in FIG. 2 to complete the configuration of a virtual camera sequence.
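The seven per-keyframe parameters described above might be represented as follows. This is a sketch only; the class and field names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraKeyframe:
    # One keyframe of the virtual camera, mirroring the seven
    # parameters in FIG. 2; any parameter may be left unset (None).
    frame: int
    x: Optional[float] = None
    y: Optional[float] = None
    z: Optional[float] = None
    roll: Optional[float] = None
    pitch: Optional[float] = None
    yaw: Optional[float] = None
    scale: Optional[float] = None

# Like keyframe 220 in the figure, which sets only the y- and z-axis
# coordinates, leaving all other parameters unset.
kf = CameraKeyframe(frame=220, y=150.0, z=80.0)
print([name for name, v in vars(kf).items() if v is not None and name != "frame"])
```

Partially populated keyframes are what make manual editing costly: moving the whole shot means touching every such record, which is the iteration problem the next paragraph describes.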
However, in a virtual scene, the position of the virtual object and its relation to the virtual camera can be obtained in real time. Designing the lens language by judging the rendered result consumes time and effort and fails to exploit this natural advantage of the virtual environment. Moreover, the lens language of an animation segment usually needs to be revised and iterated many times before it is finalized: if the keyframes of a camera animation have already been set and the entire shot then needs to be moved back by some distance, every keyframe must be modified. The iteration cost is high, and the planner must communicate repeatedly with the animators.
The embodiments of the present application provide a virtual camera configuration method in which data is imported into a camera-movement tool, the configuration parameters of each virtual camera are determined from the imported data, and a virtual camera sequence is then generated. The configuration parameters are determined from the positions of the virtual object and the virtual camera in the virtual scene and from the footage captured by the camera, and the virtual camera sequence is generated automatically, without manual adjustment by an animator.
Referring to FIG. 3, a flowchart of a virtual camera configuration method provided by an embodiment of the present application is shown. The method may be executed by the terminal device 100 shown in FIG. 1 and may include at least one of the following steps 310-340.
Step 310: display a user interface of the camera-movement tool, with an import control displayed in the user interface.
The camera-movement tool is a tool for configuring virtual cameras. It may be a standalone application or a plug-in based on a target application (such as a game engine); the present application is not limited in this regard.
The user interface of the camera-movement tool is a human-machine interaction interface in which an animator can configure virtual cameras to generate a virtual camera sequence.
If the camera-movement tool is a game-engine plug-in, its user interface may be independent of the game engine's user interface, or it may be an interface area displayed within the game engine's user interface; the present application is not limited in this regard. Illustratively, the user interface of the camera-movement tool may be the standalone user interface 400 shown in FIG. 4, or one of the interface areas 510 in the game engine's user interface shown in FIG. 5.
At least one control for configuring virtual cameras is displayed in the user interface of the camera-movement tool. Using these controls, an animator can add, delete, modify, clip, and migrate virtual cameras.
In some embodiments, an import control is displayed in the user interface for importing data into the camera-movement tool. Illustratively, as shown in FIG. 4, an import control 401 is displayed in the user interface 400.
Step 320: in response to an operation on the import control, display a virtual camera sequence generated from imported data, the virtual camera sequence including at least one virtual camera, and the imported data defining the type and order of each virtual camera.
In some embodiments, the imported data is imported into the camera-movement tool in a preset canonical form. The present application does not limit this canonical form; for example, it may be a table or plain text.
As shown in Table 1, the imported data may, for example, be in tabular canonical form and include the following content.
The first three columns of Table 1 — the planning description, the time, and the range — are the animation ideas that the planner provides to the animator; the animator normalizes these descriptions to obtain the last six columns shown in Table 1. Those six columns constitute the imported data in canonical form.
In some embodiments, the type of a virtual camera is determined by its shot class and shot name, and the order of the virtual cameras is determined by each camera's start frame and end frame.
The shot class is a classification derived from the camera-movement rule of the virtual camera, and may include fixed camera position, basic camera movement, and procedurally generated camera movement. Fixed camera position means that the virtual camera shoots from a fixed spot. Basic camera movement means that the virtual camera shoots according to a basic motion rule — for example, pushing in or pulling out. Procedurally generated camera movement refers to relatively complex movements, such as a circular orbit in which the virtual camera circles around the subject while shooting.
The shot name is the name of the camera-movement mode of the virtual camera. For example, in a long shot the virtual camera shoots from a fixed position and captures a wide view of the subject; in a close-up the virtual camera shoots from a fixed position and captures a close view of the subject; push-pull means that the virtual camera shoots the subject while pushing in or pulling out along a motion trajectory.
Table 1: importing data examples
The start framing determines the composition when the virtual camera begins shooting, and the end framing determines the composition when it stops; the start frame is the timestamp at which the virtual camera begins shooting, and the end frame is the timestamp at which it stops.
Of course, the imported data may include not only the shot class, shot name, start framing, end framing, start frame, and end frame, but also other data about the virtual camera; the present application is not limited in this regard. For example, the imported data may further include the subject of each virtual camera, i.e., the virtual object photographed by that camera in the virtual scene. The virtual object may be a virtual character or a virtual item in the virtual scene — for example, a game character or an object in a game scene; the present application is not limited in this regard.
In some embodiments, the imported data may or may not include the subject corresponding to each virtual camera.
For example, if all virtual cameras in the sequence share the same subject (say, every camera shoots virtual object 1), the imported data may include the subject for each camera, may omit it, or may set it once for the whole sequence.
For example, if the subjects of the cameras differ (say, they include virtual object 1, virtual object 2, and virtual object 3), the imported data may include the subject corresponding to each camera individually. A virtual camera sequence is an arrangement of virtual cameras in shooting-time order; it contains at least one virtual camera, and the footage captured by the cameras yields a video. Illustratively, as shown in Table 1, the virtual cameras in the sequence are arranged in the order of the start frames and end frames shown in Table 1.
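The canonical six-field row format described above might be validated as follows. The field names here are this sketch's own guesses at the six columns and are not taken from the patent's Table 1:

```python
# Hypothetical names for the six canonical columns of an imported row.
CANONICAL_FIELDS = {"shot_class", "shot_name",
                    "start_framing", "end_framing",
                    "start_frame", "end_frame"}

def validate_row(row):
    # A row must carry the six canonical fields; a "subject" field
    # is optional, per the embodiments above.
    missing = CANONICAL_FIELDS - row.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return row

row = validate_row({
    "shot_class": "fixed camera position", "shot_name": "long shot",
    "start_framing": "full body", "end_framing": "full body",
    "start_frame": 0, "end_frame": 40, "subject": "virtual object 1",
})
print(row["shot_name"])
```

Making the subject optional at the row level matches the two cases above: a shared subject can be set once globally, while differing subjects are carried per row.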
In some embodiments, virtual cameras may be categorized according to their camera-movement rules.
For example, virtual cameras may be divided into fixed shots and moving shots according to whether the camera moves.
For example, virtual cameras may be categorized by the shot classes mentioned above: fixed camera position, basic camera movement, and procedurally generated camera movement.
For example, virtual cameras may be categorized by the shot names mentioned above: long shot, close-up, medium shot, push-pull, circular orbit, and so on.
In some embodiments, the imported data is imported into the camera-movement tool as a file — for example, as an Excel spreadsheet.
In some embodiments, in response to an operation on the import control, a file selection interface is displayed in which at least one file is shown; in response to a selection of a target file, the data stored in the target file is imported into the camera-movement tool.
Illustratively, in response to an operation on the import control 401 in FIG. 4, the file selection interface 600 shown in FIG. 6 is displayed, in which the user may select a target file. In response to a selection of the target file 610, the data stored in the target file 610 is imported into the camera-movement tool.
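The file-import step might be sketched as follows. The patent's example uses an Excel file; to stay self-contained this sketch reads the same table in CSV form instead, and all names in it are hypothetical:

```python
import csv
import io

# In-memory stand-in for the selected target file (e.g. file 610).
target_file = io.StringIO(
    "shot_class,shot_name,start_frame,end_frame\n"
    "fixed camera position,long shot,0,40\n"
    "basic camera movement,push-pull,40,80\n"
)

def import_file(f):
    # Read each row of the table into the camera-movement tool as a
    # dict, converting the frame columns to integers.
    rows = []
    for r in csv.DictReader(f):
        r["start_frame"] = int(r["start_frame"])
        r["end_frame"] = int(r["end_frame"])
        rows.append(r)
    return rows

rows = import_file(target_file)
print(len(rows), rows[1]["shot_name"])
```

In practice an Excel file would need a spreadsheet reader, but the parsing logic — one row per virtual camera, with typed frame columns — is the same.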
Illustratively, as shown in FIG. 4, a virtual camera sequence 402 is generated from the imported data and displayed; the sequence 402 includes at least one virtual camera. In some embodiments, the sequence is displayed in the order of the virtual cameras. For example, for the virtual cameras shown in Table 1, the sequence is displayed in the order: fixed camera position/long shot, basic camera movement/push-pull, fixed camera position/long shot, procedurally generated camera movement/circular orbit, fixed camera position/long shot, basic camera movement/push-pull.
In some embodiments, the type of a virtual camera can be determined from the shot name in the imported data, and the order of the virtual cameras from the start frames and end frames in the imported data; the configuration parameters of the virtual cameras can then be determined from these.
In some embodiments, frames captured by virtual cameras that use the same camera-movement mode have similar composition. For example, suppose a virtual camera uses a close-up camera movement to capture the eyes of a shooting object. In the virtual scene, once the position of the shooting object is determined, the position of the virtual camera can be determined from the known relationship between the camera-to-subject distance and the captured frame. For example, if the frame captured by the virtual camera contains the subject's head when the distance between them is 10 m, and contains half of the subject's face when the distance is 5 m, the position of the virtual camera can easily be determined from the desired framing.
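The distance-to-framing relationship described above can be sketched as follows. This is a minimal Python illustration, not the tool's implementation; the framing-to-distance table and all names are invented for the example.

```python
# Hypothetical sketch: choose a camera position from the desired framing.
# The framing-to-distance table below is illustrative only.
FRAMING_DISTANCE = {"head": 10.0, "half-face": 5.0}

def camera_position(subject_pos, framing, direction=(0.0, 1.0, 0.0)):
    """Place the camera FRAMING_DISTANCE[framing] metres away from the
    subject, backed off along `direction` (a unit vector)."""
    d = FRAMING_DISTANCE[framing]
    return tuple(p - d * u for p, u in zip(subject_pos, direction))

# A "head" framing places the camera 10 m from a subject at (0, 0, 1.7).
pos = camera_position((0.0, 0.0, 1.7), "head")
```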
In some embodiments, the order of the individual virtual cameras may be determined from the start and end frames in the imported data.
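As a minimal sketch of this ordering step, the cameras can simply be sorted by their start frames; the dict-based import record used here is an assumption, not the tool's actual schema.

```python
def order_cameras(imported):
    """Return the virtual cameras sorted into shooting order by start frame."""
    return sorted(imported, key=lambda cam: cam["start_frame"])

# Hypothetical import records; "start_frame"/"end_frame" mirror the start
# frame and end frame described in the imported data.
cams = [
    {"name": "push-pull", "start_frame": 30, "end_frame": 60},
    {"name": "long-range", "start_frame": 0, "end_frame": 29},
    {"name": "circular-surround", "start_frame": 61, "end_frame": 90},
]
ordered = order_cameras(cams)
```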
In this way, the virtual camera sequence can be generated automatically from the imported data, without the animator manually adjusting the position of the virtual camera; the operation is simple and efficient.
In some embodiments, import data is obtained in response to an operation on an import control; for a virtual camera defined by the imported data, determining configuration parameters of the virtual camera according to the type of the virtual camera and the position information of a shooting object; and generating and displaying a virtual camera sequence according to the sequence of each virtual camera defined by the imported data and the configuration parameters of each virtual camera.
In some embodiments, the position information of the shooting object may be its position in the virtual scene, or it may be bone information of the shooting object. The bone information may include the bone composition of the shooting object, the positions of its bones in the virtual scene, and its bone orientation.
In some embodiments, if the type of the virtual camera is a fixed shot, the configuration parameters of the virtual camera are determined according to the position information of the shooting object, where the configuration parameters include the position of the virtual camera.
In some embodiments, if the type of the virtual camera is a moving shot, the configuration parameters of the virtual camera are determined according to the position information of the shooting object, where the configuration parameters include the position and the motion trail of the virtual camera.
The virtual camera is illustratively a fixed-camera-position long-range camera, and the position of the virtual camera is determined according to the type of the virtual camera and the position information of the shooting object.
Illustratively, the virtual camera is a programmatically generated circular-surround camera, and the position and motion trail of the virtual camera are determined according to the type of the virtual camera and the position information of the shooting object. The motion trail of the virtual camera can be expressed by a spline and spline points. The spline is a line presented in the preview screen that describes the motion trail of the virtual camera, and the spline points are points on the spline that describe key frames of the motion trail.
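For the circular-surround case, the spline points can be sampled on a circle around the shooting object. The following Python sketch is illustrative only and makes assumptions (2D positions, uniform sampling) that the actual tool need not share.

```python
import math

def circular_spline_points(center, radius, n_points, reverse=False):
    """Sample n_points (x, y) spline points (key frames) on a circle of the
    given radius around `center`; `reverse` flips the orbit direction."""
    sign = -1.0 if reverse else 1.0
    cx, cy = center
    return [
        (cx + radius * math.cos(sign * 2.0 * math.pi * i / n_points),
         cy + radius * math.sin(sign * 2.0 * math.pi * i / n_points))
        for i in range(n_points)
    ]

# Four key frames of a circular surround of radius 5 around the subject.
pts = circular_spline_points((0.0, 0.0), 5.0, 4)
```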
In some embodiments, the configuration parameters of the virtual camera are determined according to the type of the virtual camera, the position information of the photographing object, and the orientation of the photographing object.
Step 330: in response to an operation on a first virtual camera of the at least one virtual camera, display configuration parameters of the first virtual camera, the configuration parameters being used to determine a camera-movement rule of the virtual camera in the virtual scene and being automatically generated by the mirror tool from the imported data.
Illustratively, as shown in FIG. 4, in response to an operation on a first virtual camera 402a of the at least one virtual camera, configuration parameters 403 of the first virtual camera are displayed. The configuration parameters of the first virtual camera are used to determine a camera-movement rule of the virtual camera in the virtual scene, and may include at least one of: the position of the virtual camera, the motion trail, the number of spline points, the running speed, the motion radius, whether the motion is reversed, the spline proportion, the shooting object, the amplitude, and the like.
The spline of the virtual camera refers to its motion trail, and a spline point refers to a key frame on the motion trail. The motion radius is used to describe the range of motion of the virtual camera.
In some embodiments, outline configuration parameters of the first virtual camera are also displayed in the user interface, and detailed configuration parameters of the first virtual camera are displayed in response to an operation on the first virtual camera of the at least one virtual camera. Illustratively, the outline configuration parameters of the first virtual camera may be displayed in a first area 404 of the user interface 400, and the detailed configuration parameters 403 of the first virtual camera are displayed in the user interface 400 in response to an operation on the first virtual camera.
In some embodiments, the outline configuration parameters correspond to simple camera-movement rules of the virtual camera, such as the position of the virtual camera and its rotation angle; the detailed configuration parameters correspond to complex camera-movement rules, such as the number of spline points and the spline proportion of the virtual camera.
In some embodiments, the configuration parameters of the virtual camera are automatically generated by the mirror tool based on the imported data without manual setting by the animator.
Step 340: in response to a configuration save operation for the virtual camera sequence, save configuration information of the virtual camera sequence, the configuration information including the configuration parameters of each virtual camera included in the virtual camera sequence.
In some embodiments, the generated virtual camera sequence may be saved and applied in an animation sequence.
In some embodiments, a save control is also displayed in the user interface, and the configuration information of the virtual camera sequence is saved in response to a configuration save operation for the virtual camera sequence (e.g., clicking the save control after selecting the virtual camera sequence).
Illustratively, as shown in FIG. 4, the configuration information of the virtual camera sequence is saved in response to a configuration save operation for the virtual camera sequence (e.g., clicking the save control 405 after the virtual camera sequence is selected). For example, a path selection interface as shown in FIG. 6 may be displayed, in which the user may select a target storage path; in response to a selection operation for the target storage path, the configuration information of the virtual camera sequence is stored into the target storage path selected in the path selection interface.
According to the technical solution provided by the embodiments of the present application, importing data into the mirror tool causes the mirror tool to automatically generate and display the virtual camera sequence, thereby completing the configuration of the virtual cameras. The animator does not need to adjust the virtual cameras manually or determine the position and motion trail of each virtual camera from the frames it captures; this simplifies the process of configuring virtual cameras, keeps the operation simple, and improves configuration efficiency.
In some embodiments, the method provided by the embodiment of the application further supports modification of the configuration parameters of the virtual camera.
In some embodiments, after step 330, step 350 is also included.
In step 350, the modified configuration parameters of the first virtual camera are displayed in response to the modification operation for the configuration parameters of the first virtual camera.
Illustratively, as shown in FIG. 4, the configuration parameters 403 of the first virtual camera displayed in the user interface 400 are editable; the animator may perform a modification operation on them, and the modified configuration parameters of the first virtual camera are displayed in response to the modification operation.
In some embodiments, a modification control may be displayed in the user interface, the configuration parameters of the first virtual camera are adjusted to an editable state in response to operation of the modification control, and the modified configuration parameters of the first virtual camera are displayed in response to modification operation of the configuration parameters of the first virtual camera. Illustratively, the modified start frame of the first virtual camera is displayed in response to a modification operation for the start frame of the first virtual camera. Illustratively, the modified coordinate position of the first virtual camera is displayed in response to a modification operation for the coordinate position of the first virtual camera in the virtual scene.
In this way, if the automatically generated virtual camera sequence is unsatisfactory, the animator can still adjust the configuration parameters of the virtual cameras in a customized manner.
In some embodiments, at least one preset virtual camera is also displayed in the user interface, the preset virtual camera having at least one initial configuration parameter.
In some embodiments, the initial configuration parameters corresponding to different preset virtual cameras may be the same or different. For example, the initial configuration parameters corresponding to the preset long-range camera and those corresponding to the preset close-range camera may be the same or different. For example, the initial positions of the preset long-range camera and the preset close-range camera are both at the origin of the virtual scene. For example, the initial distance between the preset long-range camera and the shooting object is 500 m, and the initial distance between the preset close-range camera and the shooting object is 200 m.
Illustratively, as shown in FIG. 4, different kinds of preset virtual cameras 406 are displayed in the user interface 400, such as fixed camera position-long-range, fixed camera position-close-range, basic camera movement-push-pull, programmatically generated camera movement-circular surround, and the like. Therefore, when adding a virtual camera, the animator only needs to select the corresponding preset virtual camera and does not need to set the type of the virtual camera, which simplifies the operation of adding a virtual camera.
In some embodiments, the present application also supports adding a virtual camera to the virtual camera sequence.
Illustratively, the step of adding a virtual camera may include at least one of the following steps 1-3.
Step 1: in response to an operation on a second virtual camera of the at least one preset virtual camera, display a configuration interface of the second virtual camera.
The configuration interface displays at least one camera-movement rule of the second virtual camera and the corresponding configuration parameters, and the animator can set the configuration parameters of the second virtual camera in the configuration interface.
Illustratively, as shown in fig. 7, in response to an operation on a second virtual camera 710 of the at least one preset virtual camera, a configuration interface 720 of the second virtual camera is displayed. The configuration interface 720 displays at least one camera-movement rule of the second virtual camera and its corresponding configuration parameters.
Step 2: display, in the configuration interface of the second virtual camera, at least one configuration parameter configured for the second virtual camera, the configured at least one configuration parameter including the position at which the second virtual camera is added to the virtual camera sequence.
In some embodiments, the at least one configuration parameter of the configuration includes a pointer time for determining a start timestamp of the shooting period of the second virtual camera.
If the configured at least one configuration parameter includes a pointer time, the pointer time is taken as the start timestamp of the shooting period of the second virtual camera, and the end timestamp of the shooting period of the virtual camera in which the pointer time falls is taken as the end timestamp of the shooting period of the second virtual camera; the second virtual camera is then added, and the end timestamp of the shooting period of the virtual camera in which the pointer time falls is modified to the pointer time. For example, the virtual camera sequence includes virtual camera 1 and virtual camera 2, the shooting period of virtual camera 1 is 1 s to 10 s, the shooting period of virtual camera 2 is 11 s to 20 s, and the pointer time is 14 s; the sequence is then adjusted such that the shooting period of virtual camera 1 is 1 s to 10 s, the shooting period of virtual camera 2 is 11 s to 14 s, and the shooting period of the second virtual camera is 14 s to 20 s.
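The pointer-time case can be sketched as follows; the dict-based sequence representation is an assumption for illustration.

```python
def insert_at_pointer(sequence, new_name, pointer):
    """Insert a camera at `pointer`: the camera whose shooting period contains
    the pointer time is truncated there, and the new camera covers the rest
    of that period. `sequence` holds dicts with "name", "start", "end"."""
    for cam in sequence:
        if cam["start"] <= pointer <= cam["end"]:
            new_cam = {"name": new_name, "start": pointer, "end": cam["end"]}
            cam["end"] = pointer  # the end timestamp becomes the pointer time
            sequence.append(new_cam)
            break
    sequence.sort(key=lambda c: c["start"])
    return sequence

seq = [{"name": "cam1", "start": 1, "end": 10},
       {"name": "cam2", "start": 11, "end": 20}]
insert_at_pointer(seq, "second", 14)
```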
In some embodiments, the at least one configuration parameter of the configuration includes a start time stamp and an end time stamp, the start time stamp and the end time stamp being used to determine a capture period of the second virtual camera.
If the configured at least one configuration parameter includes a start timestamp and an end timestamp, the shooting period of the second virtual camera is determined from the start timestamp and the end timestamp, and the second virtual camera is added; virtual cameras whose original shooting periods fall within or after the shooting period of the second virtual camera are shifted later in sequence. For example, the virtual camera sequence includes virtual camera 1 and virtual camera 2, the shooting period of virtual camera 1 is 1 s to 10 s, the shooting period of virtual camera 2 is 11 s to 20 s, and the start timestamp and the end timestamp are 11 s and 15 s, respectively; the sequence is then adjusted such that the shooting period of virtual camera 1 is 1 s to 10 s, the shooting period of the second virtual camera is 11 s to 15 s, and the shooting period of virtual camera 2 is 16 s to 25 s.
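The start/end-timestamp case can be sketched similarly (the dict representation and the one-second granularity are illustrative assumptions):

```python
def insert_with_shift(sequence, new_name, start, end):
    """Insert a camera covering [start, end]; cameras whose shooting periods
    begin at or after `start` are shifted later by the new camera's length."""
    duration = end - start + 1
    for cam in sequence:
        if cam["start"] >= start:
            cam["start"] += duration
            cam["end"] += duration
    sequence.append({"name": new_name, "start": start, "end": end})
    sequence.sort(key=lambda c: c["start"])
    return sequence

seq = [{"name": "cam1", "start": 1, "end": 10},
       {"name": "cam2", "start": 11, "end": 20}]
insert_with_shift(seq, "second", 11, 15)
```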
In some embodiments, the configured at least one configuration parameter may further include a photographic subject of the second virtual camera. Illustratively, as shown in fig. 7, a photographic subject selection control 721 is displayed in the configuration interface 720, selectable states of photographic subjects are displayed in response to an operation for the photographic subject selection control 721, and a photographic subject of the second virtual camera is displayed in the configuration interface 720 in response to a selection completion operation for the photographic subject.
In some embodiments, the at least one configuration parameter of the configuration may further include a name of the second virtual camera, a speed of operation, a spline proportion, a number of spline points, and the like.
Step 3: in response to a configuration completion operation for the second virtual camera, add and display the second virtual camera in the virtual camera sequence at the addition position.
In some embodiments, the mirror tool inserts the second virtual camera into the virtual camera sequence according to the configuration parameters of the second virtual camera and its addition position.
By the method, the operation of adding the virtual camera by the animation producer is simplified, the efficiency of configuring the virtual camera is improved, and the learning cost is reduced.
In some embodiments, a preview area of the virtual camera sequence is also displayed in the user interface.
The preview region of the virtual camera sequence may be used to preview images respectively captured by at least one virtual camera included in the virtual camera sequence.
Illustratively, as shown in FIG. 4, a preview area 407 is displayed in the user interface 400, and at least one virtual camera of the sequence of virtual cameras may be displayed in the preview area 407.
In some embodiments, at least one virtual camera in the preview area may be arranged in a time sequence, or may be arranged in a type of virtual camera, for example, in a tree structure according to the type of virtual camera. Illustratively, as shown in FIG. 4, the virtual cameras in preview area 407 are arranged in a tree structure, referred to as a virtual camera preview tree.
In some embodiments, the preview operation for the virtual camera sequence may include at least one of the following steps 1-2.
In step 1, in response to a preview operation for a virtual camera sequence, preview screens corresponding to the respective virtual cameras included in the virtual camera sequence are displayed in a preview area.
In some embodiments, a start preview control is displayed in the preview area, and in response to an operation for the start preview control, preview screens respectively corresponding to respective virtual cameras included in the virtual camera sequence are displayed in the preview area.
Illustratively, as shown in fig. 8, a start preview control 810 is displayed in the preview area 800, and in response to an operation for the start preview control 810, a preview screen 830 corresponding to each virtual camera included in the virtual camera sequence is displayed in the preview area.
In some embodiments, a rendering method with lower performance consumption is adopted to render the preview pictures corresponding to the virtual cameras respectively, and the resolution and the definition of the preview pictures corresponding to the virtual cameras respectively are reduced, so that the performance consumption of the game engine by the preview operation is reduced.
Step 2: in response to an end preview operation for the virtual camera sequence, stop the rendering flow of the preview screens and delete the rendered preview screens.
In some embodiments, an end preview control is displayed in the preview area, and in response to an operation for the end preview control, the rendering flow of the preview screen is stopped and the rendered preview screen is deleted.
Illustratively, as shown in fig. 8, an end preview control 820 is displayed in the preview area 800, and in response to an operation on the end preview control 820, the rendering flow of the preview screens is stopped and the rendered preview screens are deleted. That is, the preview area changes from displaying the preview screens 830 corresponding to the virtual cameras in the virtual camera sequence back to displaying the virtual camera preview tree 840.
In some embodiments, although the performance consumption of the preview function provided by the mirror tool is much lower than that of the preview function provided by the game engine, it is still substantial; the present application therefore provides an end-preview function. In response to an end preview operation for the virtual camera sequence, the rendering flow of the preview screens is stopped and the rendered preview screens are deleted. When previewing is not needed, the animator can choose to end the preview, stopping the rendering flow and deleting the rendered preview screens, to further reduce the performance consumption of the game engine.
In some embodiments, at least one historical virtual camera sequence is also displayed in the user interface, the historical virtual camera sequence being a virtual camera sequence that has completed configuration and saved.
Illustratively, as shown in FIG. 4, at least one historical virtual camera sequence 408 is also displayed in the user interface 400. Each historical virtual camera sequence includes at least one virtual camera therein.
In some embodiments, embodiments of the present application also provide modification functionality for historical virtual camera sequences.
In some embodiments, the modifying operation for the historical virtual camera sequence may include at least one of the following steps 1-3.
Step 1, in response to an operation for a third virtual camera in the historical virtual camera sequence, displaying configuration parameters of the third virtual camera.
Illustratively, as shown in FIG. 4, in response to operation for a third virtual camera in the historical virtual camera sequence, configuration parameters 409 for the third virtual camera are displayed.
The configuration parameters of the third virtual camera may include at least one of: start time stamp, end time stamp, frame rate of the third virtual camera.
And step 2, in response to the modification operation of the configuration parameters of the third virtual camera, displaying the modified configuration parameters of the third virtual camera.
And 3, responding to configuration preservation operation aiming at the historical virtual camera sequence, and preserving configuration information of the historical virtual camera sequence.
Illustratively, the history virtual camera sequence includes the history virtual camera 1, the history virtual camera 2, and the history virtual camera 3, and in response to the operation for the history virtual camera 1, the configuration parameters (e.g., start frame, end frame, frame rate) of the history virtual camera 1 are displayed. In response to the modification operation for the configuration parameters of the history virtual camera 1, the modified configuration parameters of the history virtual camera 1 are displayed. In response to a configuration save operation for the historical virtual camera sequence, configuration information for the historical virtual camera sequence is saved.
For step 2 and step 3, reference may be made to the modification operation and the configuration save operation described above; details are not repeated here.
In this way, the animator can modify a historical virtual camera sequence that has already been created; the modification operation is simple and quick, the animator does not need to manually adjust the key frames of the virtual camera to be modified, and the efficiency of configuring virtual cameras is improved.
In some embodiments, a first migration control and a second migration control are also displayed in the user interface.
The first migration control is used to control migration of the virtual camera sequence in a first format, and the second migration control is used to control migration of the virtual camera sequence in a second format. Migration refers to copying the virtual camera sequence into another storage path for use in the creation and configuration of other projects. A project here may be understood as a project for creating and configuring an animation sequence or a virtual camera sequence.
The first format is a format that is recognized by the mirror tool and the second format is a format that is recognized by the game engine.
In some embodiments, some game engines do not support running the mirror tool, for example because of a version mismatch or because the mirror tool is not installed. In this case, if the virtual camera sequence configured with the mirror tool needs to be reused, the game engine cannot recognize the first format; the virtual camera sequence therefore needs to be converted from the first format into the second format before being migrated to the game engine.
Illustratively, as shown in FIG. 4, a first migration control 410 and a second migration control 411 are displayed in the user interface 400, the first migration control 410 being for controlling migration of the virtual camera sequence in a first format and the second migration control 411 being for controlling migration of the virtual camera sequence in a second format.
In some embodiments, in response to an operation on the first migration control, the configuration information of the virtual camera sequence is saved into the target storage path in the first format, the first format being a format that the mirror tool supports and recognizes.
In some embodiments, a path selection interface is displayed in response to an operation for a first migration control, and configuration information for a virtual camera sequence is saved in a first format into a target storage path in response to a selected operation for the target storage path.
In some embodiments, in response to an operation on the second migration control, the configuration information of the virtual camera sequence is saved into the target storage path in the second format, the second format being a format that the game engine supports and recognizes.
Illustratively, the configuration information of the virtual camera sequence is saved into the target storage path in a CineCamera (cinema camera) class camera format. The migrated virtual camera sequence is shown in fig. 9, where the short vertical lines represent key frames.
In some embodiments, when the configuration information of the virtual camera sequence is saved into the target storage path in the second format, every frame of the virtual camera sequence is written as a key frame, so as to ensure that the virtual cameras can move and shoot normally.
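The every-frame key-frame baking can be sketched as follows; the second-based shooting period and the frame-rate parameter are illustrative assumptions, not the tool's actual export format.

```python
def bake_keyframes(camera, frame_rate):
    """Return the full list of key-frame indices for a camera whose shooting
    period runs from camera["start"] to camera["end"] (in seconds): every
    frame in the period becomes a key frame."""
    first = int(camera["start"] * frame_rate)
    last = int(camera["end"] * frame_rate)
    return list(range(first, last + 1))

# A 0 s to 2 s shooting period at 30 fps bakes key frames 0 through 60.
frames = bake_keyframes({"start": 0, "end": 2}, frame_rate=30)
```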
In some embodiments, the path selection interface is displayed in response to an operation for the second migration control, and the configuration information of the virtual camera sequence is saved in the second format into the target storage path in response to a selected operation for the target storage path.
The target storage path is a storage path of the virtual camera sequence, which is selected in a path selection interface. At least one stored path is displayed in the path selection interface.
In this way, a virtual camera sequence created with the mirror tool can be reused in the game engine, avoiding restrictions on where the virtual camera sequence can be applied.
In some embodiments, the embodiments of the present application also provide an editing function for a virtual camera.
In some embodiments, the switching operation for the virtual camera sequence may include at least one of the following steps 1-3.
Step 1: in response to a switching operation for a fourth virtual camera in the virtual camera sequence, determine a pointer time, the pointer time being the timestamp at which the virtual camera is switched; the shooting period of the fourth virtual camera runs from a first timestamp to a second timestamp.
A timestamp in the embodiments of the present application refers to a moment in time and can be simply understood as a frame.
Step 2: if the pointer time is located at the first timestamp, switch the fourth virtual camera in the virtual camera sequence to a target virtual camera, the shooting period of the target virtual camera running from the first timestamp to the second timestamp.
In some embodiments, as shown in fig. 10, if the pointer time is at the first time stamp 1010, the fourth virtual camera in the virtual camera sequence is switched to the target virtual camera, and the shooting period of the target virtual camera starts from the first time stamp 1010 to the second time stamp 1020.
For example, the first time stamp is 10s, the second time stamp is 20s, and if the pointer time is 10s, the fourth virtual camera in the virtual camera sequence is switched to the target virtual camera, and the shooting period of the target virtual camera starts from 10s to 20 s.
Step 3: if the pointer time falls between the first timestamp and the second timestamp, add a target virtual camera to the virtual camera sequence, and adjust the shooting period of the fourth virtual camera to run from the first timestamp to the pointer time, the shooting period of the target virtual camera running from the pointer time to the second timestamp.
In some embodiments, as shown in fig. 10, if the pointer time is between the first time stamp 1010 and the second time stamp 1020, a target virtual camera is added in the virtual camera sequence, and the shooting period of the fourth virtual camera is adjusted from the first time stamp 1010 to the pointer time end, and the shooting period of the target virtual camera is from the pointer time start to the second time stamp 1020 end. For example, the pointer time is located at the third time stamp 1030, and the photographing period of the fourth virtual camera is adjusted from the start of the first time stamp 1010 to the end of the third time stamp 1030, and the photographing period of the target virtual camera is adjusted from the start of the third time stamp 1030 to the end of the second time stamp 1020.
For example, if the first time stamp is 10s and the second time stamp is 20s, and the pointer time is 15s, the target virtual camera is added to the virtual camera sequence, and the shooting period of the fourth virtual camera is adjusted from 10s to 15s, and the shooting period of the target virtual camera is adjusted from 15s to 20 s.
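Steps 2 and 3 above can be sketched together; the dict representation and second-level timestamps are assumptions for illustration.

```python
def switch_camera(cam, target_name, pointer):
    """Switch `cam` to a target camera at `pointer`: replace it outright when
    the pointer sits at its start timestamp, or split its shooting period
    when the pointer falls strictly inside it."""
    if pointer == cam["start"]:
        return [{"name": target_name, "start": cam["start"], "end": cam["end"]}]
    if cam["start"] < pointer < cam["end"]:
        return [{"name": cam["name"], "start": cam["start"], "end": pointer},
                {"name": target_name, "start": pointer, "end": cam["end"]}]
    return [cam]  # pointer outside the shooting period: no change

replaced = switch_camera({"name": "cam4", "start": 10, "end": 20}, "target", 10)
split = switch_camera({"name": "cam4", "start": 10, "end": 20}, "target", 15)
```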
In some embodiments, after the above-mentioned switching operation for the fourth virtual camera in the virtual camera sequence is completed, a preview screen captured by the virtual camera sequence is displayed. As shown in fig. 10, a preview screen 1040 captured by the virtual camera sequence is displayed.
By the method, animation producer can rapidly complete the switching operation of the virtual camera, and the configuration efficiency of the virtual camera is improved.
In some embodiments, an animation generation control is also displayed in the user interface.
Illustratively, as shown in FIG. 4, an animation generation control 412 is also displayed in the user interface 400.
The animation generation control is used for controlling generation of video obtained by shooting the virtual scene based on the virtual camera sequence.
In some embodiments, in response to an operation for the animation generation control, a video is generated that captures a virtual scene based on a virtual camera sequence.
An embodiment of the present application also provides a mirror tool, and functional module block diagrams of the mirror tool are provided below. In the figures, a virtual camera is also referred to as a lens.
Referring to fig. 11, a block diagram of a common preset adding module of a mirror tool according to an embodiment of the application is shown.
The common preset adding module is a main component of the mirror tool. It contains preset descriptive content (preset category, preset name, and preset default parameter values), lens generation rules, and lens movement rules, so that a user can trigger configuration of the virtual camera through human-computer interaction with controls displayed in the user interface.
The common preset adding module generates and instantiates a virtual camera by reading the stored information of the chosen preset object (including a preset virtual camera), combining it with the skeleton information read from the shooting object, rewriting the camera's coordinate position, focal length, shooting object, focus distance, and other information in the virtual scene, and drawing spline curves for the different motion tracks in real time.
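A minimal sketch of that instantiation step, assuming a hypothetical preset table and a head-bone position read from the shooting object's skeleton (all names and offsets here are illustrative, not the tool's real data):

```python
# Hypothetical preset store: preset name -> default parameter values.
PRESETS = {
    "close_up": {"focal_length": 85.0, "shoot_distance": 1.0, "height_offset": 0.25},
    "wide": {"focal_length": 24.0, "shoot_distance": 8.0, "height_offset": 0.5},
}

def instantiate_camera(preset_name, subject_head):
    """Rewrite the camera's position, focal length, and focus distance by
    combining the stored preset with the shooting object's skeleton info."""
    preset = PRESETS[preset_name]
    x, y, z = subject_head
    return {
        "focal_length": preset["focal_length"],
        "focus_distance": preset["shoot_distance"],
        # Place the camera shoot_distance metres in front of the subject,
        # slightly above the head bone.
        "position": (x, y - preset["shoot_distance"], z + preset["height_offset"]),
        "look_at": subject_head,
    }

cam = instantiate_camera("close_up", (0.0, 0.0, 1.75))
# cam["position"] → (0.0, -1.0, 2.0)
```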
Referring to fig. 12, a block diagram of a lens and sequence creation module of a mirror tool according to an embodiment of the application is shown.
After generation and instantiation are completed, each virtual camera exists in the virtual scene and can be displayed and invoked directly, or used to output real-time signals such as a real-time rendered feed. To render a sequence in real time or to arrange a virtual camera sequence in advance, an animation sequence must first be created. The animation sequence includes the virtual camera sequence, audio tracks, animations of the shooting objects, and the like.
The user can enter the start frame, end frame, and frame rate for the animation sequence, or fill in a normalized shot script, and then click the create-sequence button to trigger sequence generation. This function creates an animation sequence and a virtual camera array for the selected virtual cameras, then traverses the array and, based on each camera's configuration parameters, creates the necessary motion tracks and key frames in the sequence so that each camera's movement can be reproduced within it.
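Under the assumption that each camera's configuration is a plain dictionary (the real tool's data model is not disclosed), the traversal described above might look like:

```python
def create_animation_sequence(cameras, start_frame, end_frame, frame_rate):
    """Create an animation sequence and, for every selected camera, a track
    with the key frames its configuration parameters call for."""
    sequence = {"start_frame": start_frame, "end_frame": end_frame,
                "frame_rate": frame_rate, "tracks": []}
    for camera in cameras:
        track = {"camera": camera["name"], "keyframes": []}
        if camera.get("type") == "dynamic":
            # A dynamic lens gets one key frame per trajectory point so its
            # motion can be reproduced inside the sequence.
            for frame, position in camera["trajectory"]:
                track["keyframes"].append({"frame": frame, "position": position})
        else:
            # A fixed lens only needs its static position on the start frame.
            track["keyframes"].append({"frame": start_frame,
                                       "position": camera["position"]})
        sequence["tracks"].append(track)
    return sequence

seq = create_animation_sequence(
    [{"name": "cam_a", "type": "fixed", "position": (0, 0, 0)},
     {"name": "cam_b", "type": "dynamic",
      "trajectory": [(0, (0, 0, 0)), (30, (1, 0, 0))]}],
    start_frame=0, end_frame=120, frame_rate=30)
```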
The mirror tool also provides another user interface through which the user can modify a virtual camera's configuration parameters in real time, rewriting both the camera's motion rules and its motion within the animation sequence.
The mirror tool also provides a conversion function, so that a virtual camera sequence based on the mirror tool can be converted into a virtual camera sequence supported by the game engine, and an animation sequence based on the mirror tool can be converted into an ordinary animation sequence, removing any dependence on the mirror tool at run time.
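The conversion step might be sketched as below; the JSON layouts for both formats are assumptions, since the patent does not specify either on-disk format:

```python
import json

def export_sequence(cameras, fmt, path):
    """Save the sequence configuration either in the mirror tool's own format
    or in a flat layout a game engine could load directly, so later playback
    does not depend on the mirror tool."""
    if fmt == "tool":
        payload = {"mirror_tool_version": 1, "cameras": cameras}
    elif fmt == "engine":
        # Strip tool-only fields, keeping just what the engine needs.
        payload = {"cameras": [{"name": c["name"], "params": c["params"]}
                               for c in cameras]}
    else:
        raise ValueError(f"unknown format: {fmt}")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f)
    return payload
```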
Referring to fig. 13, a block diagram of a quick editing and previewing module of a mirror tool according to an embodiment of the present application is shown.
Because the switching logic of virtual cameras within an animation sequence is complex, and the picture shot by each camera at the current time point cannot otherwise be previewed, the mirror tool also provides a user interface that simulates a vision-switching console, allowing the user to quickly edit and preview the virtual cameras.
Clicking the start-preview control displays real-time images of all virtual cameras in the virtual camera sequence in the user interface. After a virtual camera is selected, the virtual camera switching track binds that camera's track over the range from the current frame to the end frame, and each switched-to segment becomes a virtual camera chapter (the shooting period of one virtual camera).
When the virtual camera sequence is quick-edited, the mirror tool first determines whether the current frame falls within an already completed virtual camera chapter. If the current frame is the start frame of a chapter and a target virtual camera is selected, the chapter's camera binding is switched to the target virtual camera. If the current frame falls in the middle of a chapter, the chapter is split: the part before the current frame keeps the original camera binding, and the part after the current frame binds the target virtual camera.
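That branch logic can be sketched as follows, with chapters modelled as illustrative (camera, start_frame, end_frame) tuples, end exclusive:

```python
def quick_edit(chapters, current_frame, target_camera):
    """If the current frame is a chapter's start frame, rebind the whole
    chapter to target_camera; if it falls mid-chapter, split it: the front
    part keeps the original camera, the back part binds target_camera."""
    for i, (camera, start, end) in enumerate(chapters):
        if current_frame == start:
            chapters[i] = (target_camera, start, end)
            return chapters
        if start < current_frame < end:
            chapters[i] = (camera, start, current_frame)
            chapters.insert(i + 1, (target_camera, current_frame, end))
            return chapters
    raise ValueError("current frame lies outside every chapter")

# Rebinding: the play head sits exactly on the chapter's start frame.
print(quick_edit([("cam_a", 0, 100)], 0, "cam_b"))    # [('cam_b', 0, 100)]
# Splitting: the play head sits mid-chapter.
print(quick_edit([("cam_a", 0, 100)], 40, "cam_b"))   # [('cam_a', 0, 40), ('cam_b', 40, 100)]
```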
The technical solution provided by the application effectively improves virtual camera configuration efficiency, supports quick creation, adjustment, and repeated modification, lowers the learning cost of producing animation with a game engine, is friendly to users with no prior experience, and supports reuse across projects.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 14, a block diagram of a configuration apparatus of a virtual camera according to an embodiment of the present application is shown. The apparatus has functions for implementing the above method examples; the functions may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be the terminal device described above, or may be provided in the terminal device. As shown in fig. 14, the apparatus 1400 includes: a first display module 1410, a second display module 1420, a third display module 1430, and a save module 1440.
The first display module 1410 is configured to display a user interface of the mirror tool, where an import control is displayed in the user interface.
A second display module 1420, configured to display, in response to an operation on the import control, a virtual camera sequence generated according to import data, where the virtual camera sequence includes at least one virtual camera, and the import data is used to define a category and an order of each of the virtual cameras.
A third display module 1430 for displaying configuration parameters of a first virtual camera of the at least one virtual camera in response to an operation for the first virtual camera, the configuration parameters being used to determine a mirror rule for the virtual camera in a virtual scene, and the configuration parameters being automatically generated by the mirror tool in accordance with the import data.
A saving module 1440, configured to save configuration information of the virtual camera sequence in response to a configuration save operation for the virtual camera sequence, where the configuration information includes configuration parameters of each virtual camera included in the virtual camera sequence.
In some embodiments, the second display module 1420 is configured to obtain the import data in response to an operation on the import control; for the virtual camera defined by the imported data, determining configuration parameters of the virtual camera according to the type of the virtual camera and the position information of a shooting object; and generating and displaying the virtual camera sequence according to the sequence of each virtual camera defined by the imported data and the configuration parameters of each virtual camera.
In some embodiments, the second display module 1420 is configured to determine, if the type of the virtual camera is a fixed lens, the configuration parameters of the virtual camera according to the position information of the shooting object, where the configuration parameters of the virtual camera include the position of the virtual camera; or,
if the type of the virtual camera is a dynamic lens, determine the configuration parameters of the virtual camera according to the position information of the shooting object, where the configuration parameters of the virtual camera include the position and motion track of the virtual camera.
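A hedged sketch of those two branches; the framing offsets and the straight-line dolly trajectory are invented for illustration only:

```python
def build_config(camera_type, subject_position):
    """Derive configuration parameters from the lens type and the shooting
    object's position: a fixed lens gets only a position, while a dynamic
    lens additionally gets a motion track."""
    x, y, z = subject_position
    position = (x, y - 3.0, z + 1.0)  # frame the subject from the front
    config = {"position": position}
    if camera_type == "dynamic":
        # Simple two-point dolly moving halfway toward the subject.
        config["trajectory"] = [position, (x, y - 1.5, z + 1.0)]
    return config

fixed = build_config("fixed", (0.0, 0.0, 0.0))     # {'position': (0.0, -3.0, 1.0)}
dynamic = build_config("dynamic", (0.0, 0.0, 0.0)) # adds a two-point trajectory
```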
In some embodiments, as shown in fig. 15, the apparatus 1400 further comprises a fourth display module 1450.
A fourth display module 1450 for displaying the modified configuration parameters of the first virtual camera in response to the modification operation for the configuration parameters of the first virtual camera.
In some embodiments, at least one preset virtual camera is also displayed in the user interface, the preset virtual camera having at least one initial configuration parameter;
the fourth display module 1450 is further configured to display a configuration interface of a second virtual camera among the at least one preset virtual camera in response to an operation for the second virtual camera; displaying, in a configuration interface of the second virtual camera, at least one configuration parameter configured for the second virtual camera, the configured at least one configuration parameter comprising: an addition position of the second virtual camera in the virtual camera sequence; and in response to completing the operation for the configuration of the second virtual camera, adding and displaying the second virtual camera in the virtual camera sequence according to the adding position.
In some embodiments, a preview area of the virtual camera sequence is also displayed in the user interface;
the fourth display module 1450 is further configured to display preview screens corresponding to respective virtual cameras included in the virtual camera sequence in the preview area in response to a preview operation for the virtual camera sequence.
In some embodiments, the fourth display module 1450 is further configured to stop the rendering process of the preview screen and delete the rendered preview screen in response to the ending preview operation for the virtual camera sequence.
In some embodiments, at least one historical virtual camera sequence is also displayed in the user interface, where a historical virtual camera sequence is a virtual camera sequence that has already been configured and saved;
the fourth display module 1450 is further configured to display configuration parameters for a third virtual camera in the historical virtual camera sequence in response to an operation for the third virtual camera; displaying the modified configuration parameters of the third virtual camera in response to a modification operation for the configuration parameters of the third virtual camera; and saving configuration information of the historical virtual camera sequence in response to a configuration save operation for the historical virtual camera sequence.
In some embodiments, a first migration control and a second migration control are also displayed in the user interface;
the fourth display module 1450 is further configured to save configuration information of the virtual camera sequence in a first format into a target storage path in response to an operation for the first migration control, the first format being a format that the mirror tool supports and recognizes; or,
the fourth display module 1450 is further configured to save the configuration information of the virtual camera sequence in a second format into the target storage path in response to an operation for the second migration control, the second format being a format that the game engine supports and recognizes.
In some embodiments, the fourth display module 1450 is further configured to determine, in response to a switching operation for a fourth virtual camera in the virtual camera sequence, a pointer time, the pointer time being a time stamp at which the virtual camera needs to be switched, the shooting period of the fourth virtual camera running from a first time stamp to a second time stamp; if the pointer time is located at the first time stamp, switch the fourth virtual camera in the virtual camera sequence to a target virtual camera, where the shooting period of the target virtual camera runs from the first time stamp to the second time stamp; and if the pointer time falls between the first time stamp and the second time stamp, add the target virtual camera to the virtual camera sequence, where the shooting period of the fourth virtual camera is adjusted to run from the first time stamp to the pointer time, and the shooting period of the target virtual camera runs from the pointer time to the second time stamp.
In some embodiments, an animation generation control is also displayed in the user interface, and the apparatus 1400 further comprises a generation module 1460.
And a generating module 1460, configured to generate, in response to an operation for the animation generation control, a video obtained by capturing the virtual scene based on the virtual camera sequence.
It should be noted that when the apparatus provided in the foregoing embodiments implements its functions, the division into the above functional modules is merely illustrative. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above.
The specific manner in which each module performs operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be repeated here.
Referring to fig. 16, a block diagram of a terminal device according to another exemplary embodiment of the present application is shown.
In general, the computer device 1600 includes: a processor 1601, and a memory 1602.
The processor 1601 may include one or more processing cores, for example a 4-core or 9-core processor. The processor 1601 may be implemented in at least one hardware form among a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering the content to be displayed on the display screen. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1602 stores a computer program that is loaded and executed by the processor 1601 to implement the virtual camera configuration method described above.
Those skilled in the art will appreciate that the architecture shown in fig. 16 does not limit the computer device 1600, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium is also provided, in which a computer program is stored, which computer program, when being executed by a processor of a computer device, implements the above-mentioned method of configuring a virtual camera.
Alternatively, the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random Access Memory), SSD (Solid State Drive), an optical disk, or the like. The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory), among others.
In an exemplary embodiment, a computer program product is also provided, the computer program product comprising a computer program stored in a computer readable storage medium. The processor of the terminal device reads the computer program from the computer-readable storage medium, and the processor executes the computer program so that the terminal device executes the above-described configuration method of the virtual camera.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. In the embodiments of the application, when the examples are applied, relevant data collection and processing should strictly comply with the requirements of applicable national laws and regulations, the informed consent or separate consent (or other legal basis) of the personal-information subject should be obtained, and subsequent data use and processing should remain within the scope authorized by the laws, regulations, and the personal-information subject.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (15)

1. A method of configuring a virtual camera, the method comprising:
displaying a user interface of the mirror transport tool, wherein an import control is displayed in the user interface;
in response to an operation for the import control, displaying a virtual camera sequence generated according to import data, wherein the virtual camera sequence comprises at least one virtual camera, and the import data is used for defining the type and the sequence of each virtual camera;
in response to an operation for a first virtual camera of the at least one virtual camera, displaying configuration parameters of the first virtual camera, the configuration parameters being used to determine a mirror rule of the virtual camera in a virtual scene, and the configuration parameters being automatically generated by the mirror tool from the imported data;
and storing configuration information of the virtual camera sequence in response to a configuration storage operation for the virtual camera sequence, wherein the configuration information comprises configuration parameters of each virtual camera contained in the virtual camera sequence.
2. The method of claim 1, wherein the displaying, in response to an operation on the import control, a virtual camera sequence generated from import data comprises:
acquiring the import data in response to the operation of the import control;
for the virtual camera defined by the imported data, determining configuration parameters of the virtual camera according to the type of the virtual camera and the position information of a shooting object;
and generating and displaying the virtual camera sequence according to the sequence of each virtual camera defined by the imported data and the configuration parameters of each virtual camera.
3. The method according to claim 2, wherein the determining the configuration parameters of the virtual camera according to the type of the virtual camera and the location information of the photographed object includes:
if the type of the virtual camera is a fixed lens, determining configuration parameters of the virtual camera according to the position information of the shooting object, wherein the configuration parameters of the virtual camera comprise the position of the virtual camera;
or,
and if the type of the virtual camera is a dynamic lens, determining configuration parameters of the virtual camera according to the position information of the shooting object, wherein the configuration parameters of the virtual camera comprise the position and the motion trail of the virtual camera.
4. The method of claim 1, wherein after displaying the configuration parameters of the first virtual camera, further comprising:
and displaying the modified configuration parameters of the first virtual camera in response to a modification operation for the configuration parameters of the first virtual camera.
5. The method of claim 1, wherein at least one preset virtual camera is also displayed in the user interface, the preset virtual camera having at least one initial configuration parameter;
the method further comprises the steps of:
responsive to an operation for a second virtual camera of the at least one preset virtual camera, displaying a configuration interface of the second virtual camera;
displaying, in a configuration interface of the second virtual camera, at least one configuration parameter configured for the second virtual camera, the configured at least one configuration parameter comprising: an addition position of the second virtual camera in the virtual camera sequence;
and in response to completing the operation for the configuration of the second virtual camera, adding and displaying the second virtual camera in the virtual camera sequence according to the adding position.
6. The method of claim 1, wherein a preview area of the virtual camera sequence is also displayed in the user interface;
the method further comprises the steps of:
and in response to a preview operation for the virtual camera sequence, displaying preview pictures respectively corresponding to the virtual cameras contained in the virtual camera sequence in the preview area.
7. The method of claim 6, wherein the method further comprises:
and stopping the rendering flow of the preview picture and deleting the rendered preview picture in response to the ending preview operation for the virtual camera sequence.
8. The method of claim 1, wherein at least one historical virtual camera sequence is also displayed in the user interface, the historical virtual camera sequence being a virtual camera sequence that has been configured and saved;
the method further comprises the steps of:
responsive to an operation for a third virtual camera in the historical virtual camera sequence, displaying configuration parameters for the third virtual camera;
displaying the modified configuration parameters of the third virtual camera in response to a modification operation for the configuration parameters of the third virtual camera;
and saving configuration information of the historical virtual camera sequence in response to a configuration save operation for the historical virtual camera sequence.
9. The method of claim 1, wherein a first migration control and a second migration control are also displayed in the user interface;
the method further comprises the steps of:
in response to an operation for the first migration control, saving configuration information of the virtual camera sequence into a target storage path in a first format, wherein the first format is a format that the mirror tool supports and recognizes;
or,
in response to an operation for the second migration control, saving the configuration information of the virtual camera sequence into the target storage path in a second format, wherein the second format is a format that a game engine supports and recognizes.
10. The method according to claim 1, wherein the method further comprises:
determining a pointer time in response to a switching operation for a fourth virtual camera in the virtual camera sequence, the pointer time being a time stamp at which the virtual camera needs to be switched, the shooting period of the fourth virtual camera running from a first time stamp to a second time stamp;
if the pointer time is located at the first time stamp, switching the fourth virtual camera in the virtual camera sequence to a target virtual camera, wherein the shooting period of the target virtual camera runs from the first time stamp to the second time stamp;
and if the pointer time falls between the first time stamp and the second time stamp, adding the target virtual camera to the virtual camera sequence, wherein the shooting period of the fourth virtual camera is adjusted to run from the first time stamp to the pointer time, and the shooting period of the target virtual camera runs from the pointer time to the second time stamp.
11. The method of claim 1, wherein an animation generation control is also displayed in the user interface, the method further comprising:
and generating a video obtained by shooting the virtual scene based on the virtual camera sequence in response to the operation of the animation generation control.
12. A virtual camera configuration apparatus, the apparatus comprising:
the first display module is used for displaying a user interface of the mirror tool, and an import control is displayed in the user interface;
a second display module for displaying a virtual camera sequence generated according to import data in response to an operation for the import control, the virtual camera sequence including at least one virtual camera, the import data defining a kind and an order of each of the virtual cameras;
a third display module for displaying configuration parameters of a first virtual camera of the at least one virtual camera in response to an operation for the first virtual camera, the configuration parameters being used to determine a mirror rule of the virtual camera in a virtual scene, and the configuration parameters being automatically generated by the mirror tool according to the import data;
and the storage module is used for responding to the configuration storage operation of the virtual camera sequence and storing the configuration information of the virtual camera sequence, wherein the configuration information comprises the configuration parameters of each virtual camera contained in the virtual camera sequence.
13. A terminal device, characterized in that it comprises a processor and a memory, in which a computer program is stored, the processor being adapted to execute the computer program to implement the method according to any of claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program, which is loaded and executed by a processor to implement the method of any of claims 1 to 11.
15. A computer program product, characterized in that the computer program product comprises a computer program that is loaded and executed by a processor to implement the method of any one of claims 1 to 11.
CN202310587882.5A 2023-05-23 2023-05-23 Virtual camera configuration method, device, equipment and storage medium Pending CN116774902A (en)

Publication: CN116774902A, published 2023-09-19 (pending). Family ID: 87988690. Country: China (CN).


Legal Events

PB01 Publication; SE01 Entry into force of request for substantive examination; REG Reference to a national code (country: HK; legal event code: DE; document number: 40094515).