WO2024051487A1 - Virtual camera parameter processing method, apparatus, device, storage medium and program product - Google Patents

Virtual camera parameter processing method, apparatus, device, storage medium and program product

Info

Publication number
WO2024051487A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
virtual
virtual camera
target
parameters
Prior art date
Application number
PCT/CN2023/114226
Other languages
English (en)
French (fr)
Inventor
陈法圣
曾亮
孙磊
林之阳
周惠芝
陈湘广
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2024051487A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present application relates to the field of computer technology, and in particular to a virtual camera parameter processing method, device, electronic equipment, computer-readable storage medium and computer program product.
  • Virtual-real fusion technology is a technology that integrates virtual scenes and real scenes. It is widely used in many technical fields such as multimedia video production, three-dimensional modeling, online meetings, real-time registration, intelligent interaction, and sensing. Virtual-real fusion is mainly reflected in the combination of virtual and real, real-time interaction, and three-dimensional registration and matching, and is mainly realized through display technology, interaction technology, sensing technology, and computer graphics and image technology.
  • In the related art, camera anti-shake is usually achieved through hardware stabilizers (for example, camera pan/tilt heads).
  • However, the cost of hardware stabilizers is extremely high, so anti-shake solutions in the related art are expensive and offer poor stability.
  • Embodiments of the present application provide a virtual camera parameter processing method, device, electronic equipment, computer-readable storage medium, and computer program product, which can effectively improve the stability of the virtual camera in a virtual-real fusion scene.
  • Embodiments of the present application provide a virtual camera parameter processing method, including:
  • acquiring camera parameters of a first virtual camera in a virtual scene, where the first virtual camera has a binding relationship with a physical camera in a real scene;
  • the physical camera is used to collect image data of an object in the real scene to obtain image data of the object;
  • smoothing the camera parameters of the first virtual camera to obtain target camera parameters; and based on the target camera parameters, adjusting the camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera;
  • the focus of the second virtual camera corresponds to the focus of the first virtual camera.
  • the adjusted second virtual camera is used to render an image of a virtual scene including the object based on the image data.
  • An embodiment of the present application provides a parameter processing device for a virtual camera, including:
  • the acquisition module is configured to acquire the camera parameters of the first virtual camera in the virtual scene;
  • the first virtual camera has a binding relationship with the physical camera in the real scene;
  • the physical camera is used to collect image data of objects in the real scene to obtain image data of the objects;
  • a smoothing module configured to smooth the camera parameters of the first virtual camera to obtain target camera parameters
  • the adjustment module is configured to adjust the camera parameters of the second virtual camera in the virtual scene based on the target camera parameters to obtain an adjusted second virtual camera;
  • the focus of the second virtual camera corresponds to the focus of the first virtual camera;
  • the adjusted second virtual camera is used to render an image of a virtual scene including the object based on the image data.
  • An embodiment of the present application provides an electronic device, including:
  • Memory configured to store computer-executable instructions or computer programs
  • the processor is configured to implement the virtual camera parameter processing method provided by the embodiment of the present application when executing computer-executable instructions or computer programs stored in the memory.
  • Embodiments of the present application provide a computer-readable storage medium that stores computer-executable instructions for causing a processor to implement the virtual camera parameter processing method provided by embodiments of the present application when executed.
  • Embodiments of the present application provide a computer program product.
  • the computer program product includes a computer program or computer-executable instructions.
  • the computer program or computer-executable instructions are stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device executes the virtual camera parameter processing method described in the embodiment of the present application.
  • Since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera and the physical camera have the same camera parameters, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing the camera parameters of the physical camera.
  • Since the focus of the first virtual camera corresponds to the focus of the second virtual camera, the camera parameters of the second virtual camera are adjusted based on the target camera parameters obtained by smoothing, so that the target camera parameters obtained by smoothing are transferred to the second virtual camera. In this way, the physical camera in the real scene does not need the assistance of a hardware stabilizer: even if the physical camera shakes, the camera parameters of the second virtual camera can remain stable, which effectively improves the stability of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thus significantly reducing hardware costs.
  • FIGS. 1A and 1B are schematic diagrams of the effects of related technologies;
  • Figure 2 is a schematic structural diagram of the parameter processing system architecture of a virtual camera provided by an embodiment of the present application
  • Figure 3 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application.
  • Figure 4 is a schematic flowchart of a virtual camera parameter processing method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application.
  • FIGS. 6 to 7 are flowcharts of a virtual camera parameter processing method provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application.
  • FIGS. 9 to 11 are flowcharts of a virtual camera parameter processing method provided by embodiments of the present application.
  • FIGS. 12 to 14 are schematic diagrams of the principles of the virtual camera parameter processing method provided by embodiments of the present application.
  • The terms "first/second/third" are only used to distinguish similar objects and do not represent a specific ordering of objects. It should be understood that, where permitted, the specific order or sequence of "first/second/third" may be interchanged, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
  • Virtual scene: a scene displayed (or provided) when an application runs on an electronic device.
  • A virtual scene can be a simulation environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene.
  • the embodiments of this application do not limit the dimensions of the virtual scene.
  • the virtual scene can include the sky, land, ocean, etc.
  • the land can include environmental elements such as deserts and cities, and the user can control virtual objects to move in the virtual scene.
  • Virtual objects: images of various people and objects that can interact in the virtual scene, or movable objects in the virtual scene.
  • the movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, animal, etc. displayed in a virtual scene.
  • the virtual object may be a virtual avatar representing the user in the virtual scene.
  • the virtual scene may include multiple virtual objects. Each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • Virtual-real fusion technology: a technology that integrates virtual scenes and real scenes, widely used in many technical fields such as multimedia video production, three-dimensional modeling, online meetings, real-time registration, intelligent interaction, and sensing. The fusion of virtual and real is mainly reflected in the combination of virtual and real, real-time interaction, and three-dimensional registration and matching, and is mainly realized through display technology, interaction technology, sensing technology, and computer graphics and image technology.
  • Virtual camera: a "camera" set up in computer animation software or a virtual engine, used for virtual shooting.
  • Virtual shooting is shooting through a computer-generated virtual scene; by setting the position, angle, and other camera parameters of the virtual camera, the virtual camera can simulate the camera operation in real shooting.
  • The role of the virtual camera in expressing the viewpoint during animation is equivalent to that of the camera in the traditional sense.
  • The objects captured by the virtual camera are completely different from those captured by the physical camera: the physical camera shoots real people or actually built scenes, while the virtual camera shoots models built in 3D software.
  • The virtual camera is presented in the form of an icon in the virtual engine, and it also has parameters such as lens, focal length, focus, aperture, and depth of field.
  • The camera parameters of the virtual camera are buttons or numerical input fields integrated on a panel; the operator only needs to input parameters or drag the mouse to configure the virtual camera parameters;
  • the virtual camera and the bound physical camera have the same camera parameters: a pairing request can be sent to the game engine (virtual engine) through the physical camera;
  • the game engine adds a virtual camera paired with the physical camera in the virtual scene;
  • during the movement of the physical camera, the camera parameters of the physical camera are sent to the virtual camera in real time;
  • after the virtual camera receives the camera parameters of the physical camera, it synchronizes in real time based on the received camera parameters to ensure that the virtual camera and the bound physical camera have the same camera parameters. That is, the virtual camera can be regarded as the digital twin of the physical camera in the real scene bound to it.
  • Virtual engine: the core component of an edited computer virtual system or of an interactive real-time image application. These systems provide designers of virtual scenes with the various tools needed to write virtual scenes; their purpose is to allow designers to write programs easily and quickly.
  • the virtual engine includes a rendering engine (a rendering engine includes a two-dimensional rendering engine and a three-dimensional rendering engine), a physics engine, a collision detection engine, a sound effects engine, a script engine, an animation engine, Artificial intelligence engine, network engine and scene management engine, etc.
  • the camera parameters include at least one of attitude angle parameters, field of view angle parameters and position parameters.
  • The attitude angle parameters are defined according to the Euler concept, so they are also called Euler angles.
  • The attitude angle includes the roll angle, pitch angle, and heading angle; different rotation sequences will form different coordinate transformation matrices.
  • Usually, the spatial rotation of the camera coordinate system relative to the object coordinate system is expressed in the order of heading angle, pitch angle, and roll angle.
  • the numerical value of the field of view parameter is positively related to the field of view range of the virtual camera, and the position parameter represents the three-dimensional position coordinates of the camera.
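  • As a minimal illustration (not part of the patent text), the camera parameters described above can be grouped into a simple structure; the field names and the use of degrees are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    # Attitude angle (Euler angles), in degrees; the spatial rotation is
    # expressed in the order heading, pitch, roll as described above.
    heading: float
    pitch: float
    roll: float
    # Field of view angle: a larger value corresponds to a wider view range.
    fov: float
    # Position parameter: three-dimensional coordinates of the camera.
    x: float
    y: float
    z: float
```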
  • Metaverse: a digital living space with a new social system that uses technological means for linking and creation, mapping the real world into an interactive virtual world.
  • The Metaverse is a new Internet application and social form that integrates virtual and real technologies and is created by integrating multiple new technologies. It provides an immersive experience based on extended reality technology, uses digital twin technology to generate a mirror of the real world, and builds an economic system through blockchain technology, closely integrating the virtual world and the real world in terms of social systems, identity systems, and other systems, while allowing each user to produce and edit content.
  • FIGS. 1A and 1B are schematic diagrams of the effects of related technologies.
  • In the related art, a hardware stabilizer (for example, the hardware stabilizer 40 shown in FIG. 1A, or the hardware stabilizer 41 shown in FIG. 1B) realizes the anti-shake of the camera.
  • The hardware stabilizer can be fitted with additional camera accessories to realize functions such as automatic tracking of objects.
  • However, the cost of the hardware stabilizer increases with the cost of the camera; therefore, hardware anti-shake is extremely expensive.
  • In contrast, the parameter processing method of the virtual camera provided by the embodiments of the present application can obtain the same high-quality stabilization effect as a hardware stabilizer, with extremely low complexity; it runs in real time without occupying significant computing resources, and it requires no complicated installation and debugging operations: you only need to connect the camera feed and it can be used immediately. It is extremely easy to use, has a very low threshold, and can completely replace a physical stabilizer, thereby saving costs.
  • Embodiments of the present application provide a virtual camera parameter processing method, device, electronic equipment, computer-readable storage media, and computer program products, which can significantly reduce hardware costs while effectively improving the stability of the virtual camera.
  • the following describes an exemplary application of the virtual camera parameter processing system provided by the embodiment of the present application.
  • FIG 2 is a schematic architectural diagram of a virtual camera parameter processing system 100 provided by an embodiment of the present application.
  • a terminal (terminal 400 is shown as an example) is connected to the server 200 through a network 300.
  • The network 300 can be a wide area network, a local area network, or a combination of both.
  • a client 410 may be provided on the terminal 400 and configured to display the image of the virtual scene on the graphical interface 410-1 (the graphical interface 410-1 is illustrated as an example).
  • an image of a virtual scene in an online game application (Application, APP) is displayed in the graphical interface 410-1.
  • the terminal 400 runs the network conferencing application APP and displays the image of the virtual scene in the graphical interface 410-1.
  • the terminal 400 runs the video APP and displays the image of the virtual scene in the graphical interface 410-1.
  • the terminal 400 and the server 200 are connected to each other through a wired or wireless network.
  • The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms.
  • the terminal 400 can be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart TV, a smart watch, a vehicle-mounted terminal, etc., but is not limited thereto.
  • the electronic device provided by the embodiment of the present application can be implemented as a terminal or as a server.
  • the terminal and the server can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • the virtual camera parameter processing method provided by the embodiments of the present application can also be applied to the display scene of images of virtual scenes related to reality augmentation.
  • the terminal 400 (such as a virtual wearable terminal) is used to display an image of a virtual scene.
  • The terminal 400 displays icons of virtual cameras that can be selected by the user in the graphical interface 410-1, and determines the selected virtual camera in response to the icon selection operation; the terminal 400 sends the selected virtual camera to the server 200 through the network 300, and the server 200 smooths the camera parameters of the selected virtual camera to obtain the target camera parameters and renders an image of the virtual scene through the virtual camera configured with the target camera parameters.
  • the server 200 sends the rendered image of the virtual scene to the terminal 400.
  • the terminal 400 displays the rendered image of the virtual scene in the graphical interface 410-1.
  • the terminal 400 displays an icon of a virtual camera that can be selected by the user in the graphical interface 410-1, and obtains the selected virtual camera in response to the user's selection operation on the icon displayed in the graphical interface 410-1.
  • The camera parameters of the selected virtual camera are smoothed to obtain the target camera parameters, the image of the virtual scene is rendered through the virtual camera configured with the target camera parameters, and the rendered image of the virtual scene is displayed in the graphical interface 410-1.
  • Alternatively, the terminal 400 displays icons of virtual cameras for the user to select in the graphical interface 410-1;
  • the terminal 400 sends the camera parameters of the selected virtual camera to the server 200 through the network 300, and the server 200 receives the camera parameters of the selected virtual camera and smooths them;
  • the smoothed target camera parameters are sent to the terminal 400;
  • the terminal 400 receives the smoothed target camera parameters and renders an image of the virtual scene through a virtual camera configured with the target camera parameters;
  • the terminal 400 displays the rendered image of the virtual scene in the graphical interface 410-1.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data.
  • FIG. 3 is a schematic structural diagram of an electronic device 500 for performing virtual camera parameter processing provided by an embodiment of the present application.
  • The electronic device 500 shown in Figure 3 can be the server 200 or the terminal 400 in Figure 2.
  • the electronic device 500 shown in FIG. 3 includes: at least one processor 410, a memory 450, and at least one network interface 420.
  • the various components in electronic device 500 are coupled together by bus system 440 .
  • the bus system 440 is used to implement connection communication between these components.
  • the bus system 440 also includes a power bus, a control bus, and a status signal bus.
  • the various buses are labeled as bus system 440 in FIG. 3 .
  • The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, etc.
  • Memory 450 optionally includes one or more storage devices physically located remotely from processor 410 .
  • Memory 450 includes volatile memory or non-volatile memory, and may include both volatile and non-volatile memory.
  • Non-volatile memory can be read-only memory (ROM, Read Only Memory), and volatile memory can be random access memory (RAM, Random Access Memory).
  • the memory 450 described in the embodiments of this application is intended to include any suitable type of memory.
  • the memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • the operating system 451 includes system programs used to process various basic system services and perform hardware-related tasks, such as the framework layer, core library layer, driver layer, etc., which are used to implement various basic services and process hardware-based tasks;
  • Network communication module 452 used to reach other electronic devices via one or more (wired or wireless) network interfaces 420.
  • Exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.
  • the parameter processing device of the virtual camera provided by the embodiment of the present application can be implemented in software.
  • Figure 3 shows the parameter processing device 455 of the virtual camera stored in the memory 450, which can be software in the form of a program, a plug-in, etc., and includes the following software modules: the acquisition module 4551, the smoothing module 4552, and the adjustment module 4553. These modules are logical, so they can be combined or split arbitrarily according to the functions implemented. The functions of each module are explained below.
  • the parameter processing device of the virtual camera provided by the embodiment of the present application can be implemented in hardware.
  • the parameter processing device of the virtual camera provided by the embodiment of the present application can be implemented in the form of a hardware decoding processor.
  • As an example, a processor in the form of a hardware decoding processor may adopt one or more Application Specific Integrated Circuits (ASIC, Application Specific Integrated Circuit), DSPs, Programmable Logic Devices (PLD, Programmable Logic Device), Complex Programmable Logic Devices (CPLD, Complex Programmable Logic Device), Field-Programmable Gate Arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
  • a terminal or server can implement the virtual camera parameter processing method provided by embodiments of the present application by running a computer program or computer-executable instructions.
  • The computer program can be a native program in the operating system (for example, a dedicated image deblurring program) or a software module that can be embedded in any program (such as a module in an instant messaging client, a photo album program, an electronic map client, or a navigation client); it can also be a native application (APP, Application), that is, a program that needs to be installed in the operating system to run.
  • the computer program described above can be any form of application, module or plug-in.
  • Figure 4 is a schematic flowchart of a parameter processing method for a virtual camera provided by an embodiment of the present application.
  • the method for parameter processing of a virtual camera provided by an embodiment of the present application will be described with reference to steps 101 to 104 shown in Figure 4. It can be implemented by the server or the terminal alone, or by the server and the terminal collaboratively. The following will take the server alone as an example for explanation.
  • step 101 the server obtains camera parameters of the first virtual camera in the virtual scene.
  • the camera parameters may include at least one of an attitude angle, a field of view angle, and a camera position.
  • the first virtual camera has a binding relationship with a physical camera in the real scene, and the physical camera is used to monitor the real scene.
  • the object performs image data collection to obtain image data of the object.
  • The first virtual camera having a binding relationship with the physical camera in the real scene means that the first virtual camera and the bound physical camera have the same camera parameters; that is, the camera parameters of the first virtual camera change as the camera parameters of the bound physical camera change and are always the same as those of the bound physical camera. In practical applications, to make the first virtual camera and the bound physical camera have the same camera parameters, a matching request can be sent to the game engine through the physical camera.
  • The game engine adds a virtual camera paired with the physical camera in the virtual scene; during the movement of the physical camera, the camera parameters of the physical camera are sent to the virtual camera in real time.
  • After the virtual camera receives the camera parameters of the physical camera, real-time synchronization is performed based on the received camera parameters to ensure that the first virtual camera and the bound physical camera have the same camera parameters.
  • the virtual cameras mentioned in this article can be used for virtual shooting.
  • Virtual shooting is shooting through a computer-generated virtual scene; by setting camera parameters such as the position and angle of the virtual camera, the virtual camera can simulate the camera operation in real shooting.
  • Since the first virtual camera is bound to the physical camera in the real scene, the first virtual camera can be regarded as the digital twin of the physical camera bound to it, as illustrated by the sketch below.
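  • The pairing and synchronization flow described above can be sketched as follows; this is a hypothetical Python illustration, and every name (the engine, the physical camera, and their methods) is an assumption rather than an API taken from the patent:

```python
class VirtualCamera:
    """Digital twin of a physical camera in the virtual scene."""

    def __init__(self):
        self.params = None

    def sync(self, physical_params):
        # Real-time synchronization: the bound virtual camera simply
        # mirrors the parameters received from the physical camera.
        self.params = physical_params

def pairing_loop(engine, physical_camera):
    # 1. The physical camera sends a matching request to the game engine,
    #    which adds a paired virtual camera in the virtual scene.
    virtual_camera = engine.add_paired_virtual_camera(physical_camera)
    # 2. During the movement of the physical camera, its camera parameters
    #    are sent to the virtual camera in real time.
    while physical_camera.is_active():
        virtual_camera.sync(physical_camera.read_params())
    return virtual_camera
```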
  • the first virtual camera may be a virtual camera 2 that has a binding relationship with the physical camera 1 in the real scene.
  • the virtual camera is configured to collect image data of virtual objects in the virtual scene to obtain image data of the virtual objects.
  • the number of virtual cameras in the virtual scene may be at least two, and the at least two virtual cameras include: a virtual camera bound to a physical camera and a virtual camera not bound to a physical camera.
  • the camera parameters include attitude angle, field of view angle and camera position, where the attitude angle (Attitude Angle) includes pitch angle, heading angle and roll angle.
  • Different rotation sequences of the virtual camera will form different coordinate transformation matrices; usually, the spatial rotation of the coordinate system of the virtual camera relative to the geographical coordinate system is represented in the order of heading angle, pitch angle, and roll angle.
  • the size of the field of view (Field Of View) determines the field of view of the virtual camera.
  • The field of view is the angle formed by taking the lens of the virtual camera as the vertex and the two edges of the maximum range through which the object image of the measured target can pass through the lens.
  • Before the server obtains the camera parameters of the first virtual camera in the virtual scene, it needs to select one of the multiple virtual cameras configured in the virtual scene as the first virtual camera.
  • The server can select the first virtual camera in the following manner: determine multiple configured virtual cameras in the virtual scene, where each configured virtual camera is bound to a different physical camera; in response to a selection operation on the multiple configured virtual cameras, determine the selected virtual camera as the first virtual camera.
  • the selection operation of the above-mentioned virtual camera can be triggered by the user through the client.
  • The client displays the camera icons of multiple virtual cameras for selection, and the user triggers the selection operation of the corresponding virtual camera by triggering its camera icon; the client then sends the information of the selection operation to the server so that the server responds to the selection operation.
  • In this way, different real cameras collect data on objects in different real scenes, and by configuring the virtual scene with a virtual camera bound to each real camera, virtual-real fusion is achieved by using the virtual camera as a bridge between real scenes and virtual scenes.
  • The selected virtual camera is determined as the first virtual camera, so that the virtual camera whose camera parameters need to be smoothed is determined according to the selection.
  • When the number of configured physical cameras is large, the number of virtual cameras bound to physical cameras also increases sharply.
  • By determining the smoothing object through selection, the camera parameters of every virtual camera need not be smoothed; instead, a virtual camera is selected and only its camera parameters are smoothed, thereby effectively reducing the number of virtual cameras that require camera parameter smoothing, reducing the amount of calculation, and improving smoothing efficiency.
  • Figure 6 is a schematic flowchart of a parameter processing method for a virtual camera provided by an embodiment of the present application; step 101 shown in Figure 6 can be implemented by performing the following steps 1011 to 1013.
  • step 1011 the target position of the object in the world coordinate system and the position of the first virtual camera are obtained.
  • The world coordinate system refers to the absolute coordinate system of the system; before a user coordinate system is established, the positions of all points on the screen are determined relative to the origin of this coordinate system.
  • the physical camera and the bound virtual camera are at the same position in the world coordinate system.
  • the physical camera collects image data of objects in the real scene and sends the collected image data to the virtual engine.
  • the number of objects in the real scene can be at least one.
  • Different objects can have their image data collected through the same physical camera or through different physical cameras.
  • the multiple objects are in different areas, and each area corresponds to a physical camera, which collects image data of the objects in that area.
  • When the number of objects within the field of view of the physical camera bound to the first virtual camera is one, obtaining the target position of the object in the world coordinate system in step 1011 above can be achieved in the following way: obtain the coordinates of multiple skeletal points of the object in the world coordinate system, and perform a weighted sum of the coordinates of the multiple skeletal points to obtain the target position of the object in the world coordinate system.
  • the object in the real scene includes multiple skeletal points. Different skeletal points have different positions in the world coordinate system.
  • Skeletal points are the skeletal support points of the external shape of the object; located at the turning points of the shape, they play a vital role in the shape.
  • As an example, the target position may be expressed as X = a1·X1 + a2·X2 + a3·X3 + ... + an·Xn, where X represents the target position of the object in the world coordinate system, Xi represents the position of the object's bone point i in the world coordinate system, ai represents the weight corresponding to the object's bone point i, and n represents the number of the object's bone points.
  • the weights of the weighted summation corresponding to different bone points are different.
  • the weights of the weighted summation corresponding to the bone points can be specifically set according to the actual situation.
  • the sum of the weights of the weighted summation corresponding to each bone point can be equal to 1.
  • In this way, the target position of the object in the world coordinate system is accurately determined, which facilitates subsequently determining the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle of the first virtual camera.
  • When the number of objects is multiple, obtaining the target position in the world coordinate system in the above step 1011 can be achieved in the following way: for each object, obtain the coordinates of its multiple skeletal points in the world coordinate system and perform a weighted sum of them to obtain the position of the object in the world coordinate system; then determine the target position based on the position of each object in the world coordinate system, where the distance between the target position and the position of each object in the world coordinate system is less than a distance threshold.
  • the distance threshold can be set according to actual application scenarios.
  • In this way, the position of each object in the world coordinate system is determined separately, and a position whose distance from each object's position in the world coordinate system is less than the distance threshold is determined as the target position, thereby accurately determining the target position, facilitating subsequent determination of the attitude angle of the first virtual camera based on the target position, and effectively improving the accuracy of the determined attitude angle; a code sketch of the weighted sum follows.
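  • A minimal sketch of the weighted-sum computation above, assuming each bone point is an (x, y, z) tuple in the world coordinate system and the weights sum to 1:

```python
def target_position(bone_positions, weights):
    # X = a1*X1 + a2*X2 + ... + an*Xn, computed per axis.
    assert len(bone_positions) == len(weights)
    return tuple(
        sum(a * p[axis] for a, p in zip(weights, bone_positions))
        for axis in range(3)  # x, y, z components
    )

# Example: three bone points with weights summing to 1.
bones = [(0.0, 0.0, 1.7), (0.1, 0.0, 1.0), (0.0, 0.1, 0.2)]
print(target_position(bones, [0.5, 0.3, 0.2]))
```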
  • In step 1012, a target direction vector is determined based on the target position of the object in the world coordinate system and the position of the first virtual camera.
  • the target direction vector is used to indicate the direction in which the first virtual camera points to the object in the world coordinate system.
  • the target direction vector is a vector with the position of the first virtual camera as the starting point in the world coordinate system and the target position of the object in the world coordinate system as the end point.
  • step 1013 the attitude angle of the first virtual camera is determined based on the target direction vector.
  • As an example, the target direction vector may be expressed as V = (x, y, z), where V represents the target direction vector, x represents the horizontal axis component of the target direction vector, y represents the longitudinal axis component of the target direction vector, and z represents the vertical axis component of the target direction vector.
  • In some embodiments, the above attitude angle includes a pitch angle and a heading angle; the above step 1013 can be implemented in the following manner: determine the arccosine of the vertical axis component of the target direction vector as the pitch angle of the first virtual camera, where the vertical axis component is the component of the target direction vector on the vertical axis of the world coordinate system; determine the ratio of the longitudinal axis component to the horizontal axis component of the target direction vector as a reference ratio, where the longitudinal axis component is the component of the target direction vector on the longitudinal axis of the world coordinate system and the horizontal axis component is the component of the target direction vector on the horizontal axis of the world coordinate system; and determine the arctangent of the reference ratio as the heading angle of the first virtual camera.
  • the attitude angle of the first virtual camera includes a roll angle
  • the size of the roll angle may be 0.
  • As an example, the pitch angle may be expressed as Q = b1 · arccos(z), where b1 represents the pitch angle coefficient, z represents the vertical axis component of the target direction vector, and Q represents the pitch angle of the first virtual camera.
  • The heading angle may be expressed as W = b2 · arctan(y / x), where W represents the heading angle of the first virtual camera, b2 represents the heading angle coefficient, x represents the horizontal axis component of the target direction vector, and y represents the longitudinal axis component of the target direction vector; a code sketch follows.
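  • A minimal sketch of steps 1012 to 1013, assuming the target direction vector is normalized before the angles are computed and that the coefficients b1 and b2 default to 1; math.atan2 is used as a robust form of arctan(y / x):

```python
import math

def attitude_from_direction(cam_pos, target_pos, b1=1.0, b2=1.0):
    # Target direction vector V = (x, y, z): from the first virtual
    # camera's position to the target position of the object.
    x, y, z = (t - c for t, c in zip(target_pos, cam_pos))
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    z = max(-1.0, min(1.0, z))       # guard against rounding error
    pitch = b1 * math.acos(z)        # Q = b1 * arccos(z)
    heading = b2 * math.atan2(y, x)  # W = b2 * arctan(y / x)
    return pitch, heading            # the roll angle is taken to be 0
```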
  • step 102 the camera parameters of the first virtual camera are smoothed to obtain the target camera parameters.
  • the camera parameters of the first virtual camera include at least one of an attitude angle, a field of view angle, and a camera position
  • the attitude angle includes a pitch angle, a heading angle, and a roll angle.
  • the above step 102 can be implemented by performing at least one of the following processes: smoothing the attitude angle of the first virtual camera to obtain the target attitude angle; smoothing the field of view angle of the first virtual camera to obtain the target field of view angle; The camera position of the first virtual camera is smoothed to obtain the target camera position.
  • The above-mentioned smoothing of the attitude angle of the first virtual camera to obtain the target attitude angle can be implemented in the following manner: smooth the pitch angle of the first virtual camera to obtain the target pitch angle; smooth the heading angle of the first virtual camera to obtain the target heading angle; and smooth the roll angle of the first virtual camera to obtain the target roll angle.
  • Smoothing refers to a processing method that reduces the gap between the parameters to be smoothed at two adjacent smoothing moments to achieve the effect of parameter smoothing.
  • For example, the first virtual camera has multiple smoothing moments, and each smoothing moment corresponds to an attitude angle of the first virtual camera; the difference between the attitude angles of any adjacent smoothing moments is reduced to within an attitude angle difference threshold to realize the smoothing of the attitude angle of the first virtual camera.
  • the first virtual camera has n smooth moments, where n is a positive integer greater than 1.
  • Figure 6 is a schematic flowchart of a virtual camera parameter processing method provided by an embodiment of the present application. Step 102 shown in Figure 6 can be implemented by executing the following steps 1021 to 1022.
  • step 1021 when the camera parameters of the first virtual camera include the camera parameters of the n-th smoothing moment, the smoothing index and the n-1th target camera parameter are obtained.
  • the camera parameters at the nth smoothing moment are camera parameters before smoothing the camera parameters of the first virtual camera at the nth smoothing moment.
  • Smoothing index: used to indicate the smoothness of camera parameters.
  • step 1022 based on the smoothing index and the n-1th target camera parameter, the camera parameter at the nth smoothing moment is smoothed to obtain the nth target camera parameter, and the nth target camera parameter is used as the target camera parameter.
  • The smoothing index is between 0 and 1 and is used to indicate the smoothness of the camera parameters: the higher the smoothing index, the higher the smoothness of the corresponding camera parameters. The smoothing index can be set according to different application scenarios.
  • When the smoothing index is between 0 and 1, the above step 1022 can be implemented in the following manner: determine the product of the camera parameter at the nth smoothing moment and the smoothing index as the first reference parameter; determine the product of the n-1th target camera parameter and the supplementary smoothing index as the second reference parameter, where the supplementary smoothing index is the difference between 1 and the smoothing index; add the first reference parameter and the second reference parameter to obtain the nth target camera parameter, and use the nth target camera parameter as the target camera parameter.
  • As an example, the first reference parameter may be expressed as T1 = k1 · θ(n), where T1 represents the first reference parameter, k1 represents the smoothing index, and θ(n) represents the camera parameter at the nth smoothing moment.
  • The second reference parameter may be expressed as T2 = (1 - k1) · S(n-1), where T2 represents the second reference parameter, k1 represents the smoothing index, S(n-1) represents the n-1th target camera parameter, and 1 - k1 represents the supplementary smoothing index.
  • The expression of the nth target camera parameter may be: S(n) = T1 + T2 = k1 · θ(n) + (1 - k1) · S(n-1), where S(n) represents the nth target camera parameter, T1 represents the first reference parameter, and T2 represents the second reference parameter; a code sketch of this step follows.
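  • A minimal sketch of the recursive smoothing step above (exponential smoothing), assuming scalar camera parameters; the choice k1 = 0.2 is illustrative only:

```python
def smooth_step(theta_n, s_prev, k1):
    # T1 = k1 * theta(n); T2 = (1 - k1) * S(n-1); S(n) = T1 + T2.
    return k1 * theta_n + (1.0 - k1) * s_prev

def smooth_stream(raw_params, k1=0.2):
    # Smooth a sequence of camera parameters across n smoothing moments;
    # the first moment has no predecessor and is passed through unchanged.
    smoothed = [raw_params[0]]
    for theta in raw_params[1:]:
        smoothed.append(smooth_step(theta, smoothed[-1], k1))
    return smoothed
```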
  • In some embodiments, the attitude angle includes a pitch angle, a heading angle, and a roll angle; a target angle can also be locked in the following manner: in response to a locking instruction for the target angle in the attitude angle, the target angle is locked, where the target angle includes at least one of the pitch angle, the heading angle, and the roll angle.
  • the pitch angle and the heading angle are locked in response to a locking command for the pitch angle and the heading angle in the attitude angle.
  • the pitch angle is locked in response to a lock instruction for the pitch angle in the attitude angle.
  • the pitch angle and the roll angle are locked in response to a lock instruction for the pitch angle and the roll angle in the attitude angle.
  • the target angle is locked to prevent the target angle from being smoothed.
  • the above step 102 can also be implemented in the following manner: smoothing the portion of the attitude angle except the target angle to obtain the target camera parameters.
  • By smoothing the parts of the attitude angle other than the target angle, different attitude angles can be gradually smoothed or partially smoothed, thereby ensuring the controllability of the attitude angle smoothing process and satisfying various application scenarios.
  • progressive smoothing can be achieved, which reduces the error rate in the smoothing process and improves the accuracy of smoothing.
  • Figure 7 is a schematic flowchart of a parameter processing method for a virtual camera provided by an embodiment of the present application; step 102 shown in Figure 7 can be implemented by performing the following steps 1023 to 1025.
  • In step 1023, the data type of the attitude angle is obtained, where the data type includes a quaternion type and an Euler angle type.
  • For an attitude angle of the Euler angle type, the Euler angles are a set of three independent angular parameters used to determine the attitude of the virtual camera, consisting of a nutation angle, a precession angle, and a rotation angle.
  • For an attitude angle of the quaternion type, the quaternion is a non-commutative extension of the complex numbers: if the set of quaternions is considered as a multi-dimensional real number space, the quaternions represent a four-dimensional space, whereas the complex numbers represent a two-dimensional space.
  • The expression for an attitude angle of the quaternion type can be: a·i + b·j + c·k + d (8), where a, b, c, and d represent the elements of the quaternion, and i, j, and k represent the imaginary units of the quaternion.
  • step 1024 when the data type is a quaternion type, each element in the attitude angle of the quaternion type is smoothed to obtain a reference attitude angle of the quaternion type.
  • FIG. 8 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application: each element (a, b, c, d) in the quaternion-type attitude angle a·i + b·j + c·k + d is smoothed one by one to obtain a reference attitude angle of the quaternion type.
  • In step 1025, data type conversion is performed on the reference attitude angle of the quaternion type to obtain a reference attitude angle of the Euler angle type, and the reference attitude angle of the Euler angle type is determined as the target attitude angle.
  • In this way, attitude angles of different types are converted and smoothed according to their data types, thereby effectively improving the universality of the attitude angle smoothing; a code sketch of steps 1024 to 1025 follows.
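  • A minimal sketch of steps 1024 to 1025, assuming the quaternion is stored as (a, b, c, d) with d as the real part; the renormalization after element-wise smoothing and the specific conversion formulas are assumptions, since the patent text does not spell them out:

```python
import math

def smooth_quaternion(q_raw, q_prev, k1):
    # Element-wise smoothing of each quaternion element, then
    # renormalization so the result remains a unit rotation.
    q = [k1 * r + (1.0 - k1) * p for r, p in zip(q_raw, q_prev)]
    n = math.sqrt(sum(e * e for e in q))
    return [e / n for e in q]

def quaternion_to_euler(q):
    # Convert a unit quaternion (a, b, c, d) = (x, y, z, w) to Euler
    # angles (roll, pitch, yaw) using the usual aerospace convention.
    x, y, z, w = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```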
  • step 103 based on the target camera parameters, the camera parameters of the second virtual camera in the virtual scene are adjusted to obtain an adjusted second virtual camera.
  • the focus of the second virtual camera corresponds to the focus of the first virtual camera.
  • the second virtual camera needs to be configured in the virtual scene.
  • The focus of the second virtual camera corresponding to the focus of the first virtual camera may mean that the distance between the focus of the second virtual camera and the focus of the first virtual camera is less than a focus distance threshold (which may be set based on actual needs), or that the focus of the second virtual camera coincides with the focus of the first virtual camera. For example, when the distance between the focus of the second virtual camera and the focus of the first virtual camera is 0, the focus of the second virtual camera coincides with the position of the focus of the first virtual camera.
  • The second virtual camera maintains the same perspective relationship as the first virtual camera by setting its focus at a distance less than the focus distance threshold from the focus of the first virtual camera; that is, the focus position of the first virtual camera is close to the focus position of the second virtual camera, thus ensuring that the second virtual camera and the first virtual camera keep the same perspective relationship.
  • In this way, the second virtual camera can always follow the shooting direction of the first virtual camera, so the second virtual camera can be used to replace the first virtual camera for image rendering; that is, the adjusted second virtual camera renders an image of the virtual scene including the object based on the image data.
  • the adjusted camera parameters of the second virtual camera are target camera parameters, that is, adjusting the camera parameters of the second virtual camera includes: setting the camera parameters of the second virtual camera as the target camera parameters.
  • In some embodiments, adjusting the camera parameters of the second virtual camera in the virtual scene can be implemented in the following manner: adjusting the current camera parameters of the second virtual camera to the target camera parameters to obtain the adjusted second virtual camera.
  • The above-mentioned adjustment of the current camera parameters of the second virtual camera to the target camera parameters can be achieved in the following manner: based on the nth target camera parameter, the camera parameters of the second virtual camera at the n-1th smoothing moment are adjusted to the nth target camera parameter to obtain the adjusted second virtual camera.
  • In some embodiments, the following processing may also be performed: in response to an adjustment instruction for the target camera parameters, adjust the target camera parameters to obtain an adjusted second virtual camera. That is, after the camera parameters of the second virtual camera are adjusted to the target camera parameters, the user can adjust the target camera parameters of the second virtual camera by triggering the adjustment instruction; for example, when the target camera parameters include the target field of view angle, the user can adjust the target field of view angle of the second virtual camera by triggering the adjustment instruction.
  • the adjusted second virtual camera is used to render an image of the virtual scene including the object based on the image data.
  • In this way, the camera parameters of the first virtual camera are smoothed to obtain the target camera parameters, and the camera parameters of the second virtual camera are adjusted based on the target camera parameters to obtain the adjusted second virtual camera; through the adjusted second virtual camera, the image data collected by the physical camera is rendered to obtain an image of the virtual scene including the object.
  • During this virtual-real picture rendering process, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera and the physical camera have the same camera parameters, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing the camera parameters of the physical camera; by configuring a second virtual camera whose focus corresponds to the focus of the first virtual camera, the target camera parameters obtained by smoothing are transferred to the second virtual camera.
  • As a result, the physical camera in the real scene does not need the assistance of a hardware stabilizer: even if the physical camera shakes, the camera parameters of the second virtual camera remain stable, which effectively improves the stability of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs. The overall flow is sketched below.
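  • A hypothetical end-to-end sketch of the flow just summarized; the object names and methods (smoother, renderer, and so on) are illustrative, not an engine API:

```python
def render_frame(first_vcam, second_vcam, smoother, image_data, renderer):
    # 1. Read the first virtual camera's parameters, which mirror the
    #    (possibly shaking) physical camera.
    raw = first_vcam.params
    # 2. Smooth them to obtain the target camera parameters.
    target = smoother.step(raw)
    # 3. Transfer the target parameters to the second virtual camera,
    #    whose focus corresponds to that of the first virtual camera.
    second_vcam.params = target
    # 4. Render the virtual-scene image containing the object through
    #    the adjusted second virtual camera.
    return renderer.render(second_vcam, image_data)
```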
  • FIG. 9 is a schematic flowchart of a virtual camera parameter processing method provided by an embodiment of the present application; it will be described in conjunction with steps 201 to 204 shown in FIG. 9.
  • The execution subject of steps 201 to 204 may be a server or a terminal, or the steps may be implemented collaboratively by a server and a terminal; the following takes the server as the execution subject as an example.
  • step 201 the camera parameters of the third virtual camera are obtained.
  • the third virtual camera does not have a binding relationship with the physical camera in the real scene.
  • the camera parameters of the third virtual camera include at least one of attitude angle, field of view angle and camera position.
  • Unlike the first virtual camera, the third virtual camera does not have a binding relationship with a physical camera in the real scene, where the physical camera is used to collect image data of objects in real scenes to obtain image data of the objects.
  • a virtual camera is used to collect image data of virtual objects in the virtual scene to obtain image data of the virtual objects.
  • The camera parameters of the third virtual camera include at least one of attitude angle, field of view angle, and camera position; the attitude angle includes pitch angle, heading angle, and roll angle. Different rotation sequences of the virtual camera will form different coordinate transformation matrices; the spatial rotation of the coordinate system of the virtual camera relative to the geographical coordinate system is usually expressed in the order of heading angle, pitch angle, and roll angle.
  • the size of the field of view determines the field of view of the virtual camera.
  • the field of view is the angle formed by taking the lens of the virtual camera as the vertex and the two edges of the maximum range through which the object image of the measured target can pass through the lens.
  • The third virtual camera may also be selected in the following manner: determine multiple configured virtual cameras in the virtual scene; in response to a selection operation on the multiple configured virtual cameras, determine the selected virtual camera as the above-mentioned third virtual camera.
  • the above-mentioned selection operation of the third virtual camera can be triggered by the user through the client.
  • The client displays the camera icons of multiple virtual cameras for selection, and the user triggers the selection operation of the corresponding virtual camera by triggering its camera icon; the client then sends the information of the selection operation to the server, so that the server responds to the selection operation and determines the virtual camera selected by the user as the third virtual camera.
  • Figure 10 is a schematic flowchart of a parameter processing method for a virtual camera provided by an embodiment of the present application; step 201 shown in Figure 10 can be implemented by performing the following steps 2011 to 2013.
  • step 2011 the position parameter of the focus position of the third virtual camera is obtained.
  • The position parameter of the focus position of the third virtual camera is the position coordinate of the focus of the third virtual camera in the world coordinate system.
  • The world coordinate system refers to the absolute coordinate system of the system; before a user coordinate system is established, the positions of all points on the screen are determined relative to the origin of this coordinate system.
  • step 2012 the direction vector of the third virtual camera is determined based on the position parameter of the focus position and the position parameter of the third virtual camera.
  • the direction vector of the third virtual camera is used to indicate the direction pointed by the third virtual camera in the world coordinate system.
  • the direction vector of the third virtual camera is a vector starting from the position of the third virtual camera and ending with the focus position of the third virtual camera in the world coordinate system.
  • step 2013, the attitude angle of the third virtual camera is determined based on the direction vector of the third virtual camera.
  • the expression of the direction vector of the third virtual camera may be: T = (x1, y1, z1), where T represents the direction vector of the third virtual camera, x1 the component of the direction vector on the horizontal axis, y1 its component on the longitudinal axis, and z1 its component on the vertical axis of the world coordinate system.
  • the above attitude angle includes a pitch angle and a heading angle; the above step 2013 can be implemented in the following manner: determine the arcsine of the vertical-axis component of the direction vector of the third virtual camera as the pitch angle of the third virtual camera, the vertical-axis component being the component of the direction vector on the vertical axis of the world coordinate system; determine the ratio of the longitudinal-axis component to the horizontal-axis component of the direction vector as a reference ratio, the longitudinal-axis component being the component of the direction vector on the longitudinal axis of the world coordinate system and the horizontal-axis component its component on the horizontal axis; and determine the arctangent of the reference ratio as the heading angle of the third virtual camera.
  • the attitude angle of the third virtual camera includes a roll angle, and the size of the roll angle may be 0.
  • the expression of the pitch angle of the third virtual camera may be: Q2 = b3·asin(z1)  (10), where b3 represents the pitch angle coefficient, z1 the vertical-axis component of the direction vector of the third virtual camera, and Q2 the pitch angle of the third virtual camera.
  • the expression of the heading angle of the third virtual camera may be: W2 = b4·atan2(y1, x1)  (11), where W2 represents the heading angle of the third virtual camera, b4 the heading angle coefficient, x1 the horizontal-axis component of the direction vector of the third virtual camera, and y1 its longitudinal-axis component.
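As an illustrative sketch (not part of the embodiments), the attitude-angle computation of formulas (10) and (11) can be written as follows, assuming the coefficients b3 and b4 are 1 and the direction vector is normalized before the arcsine is taken:

```python
import math

def attitude_from_direction(x1, y1, z1):
    """Pitch/heading/roll from a camera direction vector, per (10)-(11).

    A minimal sketch assuming b3 = b4 = 1; the roll angle is fixed to 0,
    as stated for the third virtual camera.
    """
    norm = math.sqrt(x1 * x1 + y1 * y1 + z1 * z1)
    x1, y1, z1 = x1 / norm, y1 / norm, z1 / norm  # keep asin's input in [-1, 1]
    pitch = math.asin(z1)         # Q2 = b3 * asin(z1)
    heading = math.atan2(y1, x1)  # W2 = b4 * atan2(y1, x1)
    return pitch, heading, 0.0

# direction from the camera position toward its focus position
print(attitude_from_direction(1.0, 1.0, 1.0))
```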
  • Figure 11 is a schematic flowchart of a virtual camera parameter processing method provided by an embodiment of the present application; step 201 shown in Figure 11 can be implemented by performing the following steps 2014 to 2015.
  • in step 2014, when there is a virtual object within the field of view of the third virtual camera, a virtual distance is obtained.
  • the virtual distance is the distance between a first position and a second position, where the first position is the position of the third virtual camera in the world coordinate system and the second position is the position of the virtual object in the world coordinate system; that is, the virtual distance between the position of the third virtual camera and the position of the virtual object in the world coordinate system is obtained.
  • step 2015 the field of view angle of the third virtual camera is determined based on the virtual distance, where the value of the virtual distance is proportional to the value of the field of view angle.
  • the field of view angle of the third virtual camera is determined from the numerical value of the virtual distance: when the value of the virtual distance decreases, the field of view angle of the third virtual camera decreases accordingly; when the value of the virtual distance is large, the field of view angle of the third virtual camera increases accordingly.
  • in this way, the field of view angle of the third virtual camera is dynamically controlled through the numerical value of the virtual distance, so that the field of view angle changes as the value of the virtual distance changes; this dynamic control of the field of view realizes automatic push, pull, and pan movements of the third virtual camera, effectively improving its camera-movement effect.
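A hedged sketch of the positive correlation described above follows; the linear mapping and the min_fov/max_fov/scale parameters are illustrative assumptions, not values from the embodiments:

```python
def fov_from_distance(virtual_distance, min_fov=20.0, max_fov=90.0, scale=0.5):
    """Map the virtual distance to a field of view angle in degrees.

    The field of view grows with the virtual distance (pull-out) and
    shrinks as the distance decreases (push-in); the clamp keeps the
    angle in a usable range.
    """
    return max(min_fov, min(max_fov, scale * virtual_distance))

print(fov_from_distance(60.0), fov_from_distance(160.0))  # 30.0, 80.0
```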
  • step 202 the camera parameters of the third virtual camera are smoothed to obtain the target camera parameters of the third virtual camera.
  • the camera parameters of the third virtual camera include at least one of an attitude angle, a field of view angle, and a camera position
  • the attitude angle includes a pitch angle, a heading angle, and a roll angle.
  • the above step 202 can be implemented by performing at least one of the following: smooth the attitude angle of the third virtual camera to obtain the target attitude angle parameter; smooth the field of view angle of the third virtual camera to obtain the target field of view parameter; smooth the camera position of the third virtual camera to obtain the target camera position parameter.
  • the above smoothing of the attitude angle of the third virtual camera to obtain the target attitude angle parameter can be implemented in the following manner: smooth the pitch angle of the third virtual camera to obtain the target pitch angle parameter; smooth the heading angle of the third virtual camera to obtain the target heading angle parameter; smooth the roll angle of the third virtual camera to obtain the target roll angle parameter.
  • smoothing refers to a processing method that reduces the gap between the parameters to be smoothed at two adjacent smoothing moments, so as to achieve parameter smoothing.
  • the third virtual camera has n smooth moments, where n is a positive integer greater than 1.
  • the above step 202 can be implemented in the following manner: when the camera parameters of the third virtual camera include the camera parameter of the n-th smoothing moment, obtain the smoothing index and the (n-1)-th target camera parameter; based on the smoothing index and the (n-1)-th target camera parameter, smooth the camera parameter of the n-th smoothing moment to obtain the n-th target camera parameter, and use the n-th target camera parameter as the target camera parameter.
  • the camera parameter of the n-th smoothing moment is the camera parameter of the third virtual camera at the n-th smoothing moment before smoothing; the smoothing index is used to indicate the degree of smoothing of the camera parameters; the (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the third virtual camera at the (n-1)-th smoothing moment.
  • the smoothing index is between 0 and 1 and is used to indicate the degree of smoothing of the camera parameters: the higher the smoothing index, the higher the degree of smoothing of the corresponding camera parameter; the specific value of the smoothing index can be set according to different application scenarios.
  • the camera parameter of the n-th smoothing moment can be smoothed to obtain the n-th target camera parameter, which is used as the target camera parameter, in the following manner: determine the product of the camera parameter of the n-th smoothing moment and the smoothing index as the third reference parameter; determine the product of the (n-1)-th target camera parameter and the supplementary smoothing index as the fourth reference parameter, where the supplementary smoothing index is the difference between 1 and the smoothing index; add the third reference parameter and the fourth reference parameter to obtain the n-th target camera parameter, and use the n-th target camera parameter as the target camera parameter.
  • the expression of the third reference parameter may be: T3 = k3·βn, where T3 represents the third reference parameter, k3 the smoothing index, and βn the camera parameter of the n-th smoothing moment.
  • the expression of the fourth reference parameter may be: T4 = (1 − k3)·αn−1, where T4 represents the fourth reference parameter, αn−1 the (n-1)-th target camera parameter, and 1 − k3 the supplementary smoothing index.
  • the expression of the n-th target camera parameter may be: αn = T3 + T4 = k3·βn + (1 − k3)·αn−1, where αn represents the n-th target camera parameter.
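Expressed as code, the first-order smoothing above is a minimal sketch of αn = k3·βn + (1 − k3)·αn−1; the sequence of raw pitch angles is an illustrative example, not data from the embodiments:

```python
def smooth_parameter(beta_n, alpha_prev, k3):
    """One smoothing step: n-th raw parameter -> n-th target parameter."""
    t3 = k3 * beta_n              # third reference parameter
    t4 = (1.0 - k3) * alpha_prev  # fourth reference parameter; 1 - k3 is the supplementary smoothing index
    return t3 + t4

raw = [10.0, 30.0, 12.0, 28.0]  # noisy parameter at successive smoothing moments
alpha = raw[0]
for beta in raw[1:]:
    alpha = smooth_parameter(beta, alpha, k3=0.2)
print(alpha)
```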
  • a fourth virtual camera is configured in the virtual scene, and the focus of the fourth virtual camera corresponds to the focus of the third virtual camera.
  • the focus of the fourth virtual camera corresponding to the focus of the third virtual camera may mean that the distance between the focus of the fourth virtual camera and the focus of the third virtual camera is less than a focus distance threshold; that distance may also be 0, and when the distance between the two foci is 0, the focus of the fourth virtual camera coincides with the position of the focus of the third virtual camera.
  • the fourth virtual camera needs to maintain the same perspective relationship as the third virtual camera; by setting the focus of the fourth virtual camera at a distance less than the focus distance threshold from the focus of the third virtual camera, that is, keeping the focus position of the third virtual camera close to the focus position of the fourth virtual camera, the fourth virtual camera and the third virtual camera are guaranteed to maintain the same perspective relationship.
  • by configuring in the virtual scene a fourth virtual camera whose distance from the focus of the third virtual camera is less than the focus distance threshold, the fourth virtual camera always follows the shooting direction of the third virtual camera, thereby configuring in the virtual scene a fourth virtual camera with the same shooting function and the same perspective relationship as the third virtual camera, which replaces the third virtual camera in rendering, based on the image data, an image of the virtual scene including the object.
  • step 204 based on the target camera parameters of the third virtual camera, the camera parameters of the fourth virtual camera are adjusted to obtain an adjusted fourth virtual camera.
  • the adjusted fourth virtual camera is used to render an image of the virtual scene.
  • the target camera parameters of the third virtual camera include at least one of a target attitude angle parameter of the third virtual camera, a target field of view angle parameter of the third virtual camera, and a target position parameter of the third virtual camera.
  • the above step 204 can be implemented in the following manner: adjust the camera parameters of the fourth virtual camera based on at least one of the target attitude angle parameter, the target field of view angle parameter, and the target position parameter of the third virtual camera, to obtain the adjusted fourth virtual camera.
  • the above step 204 can be implemented in the following manner: adjusting the current camera parameters of the fourth virtual camera to the target camera parameters to obtain the adjusted fourth virtual camera.
  • the above adjustment of the current camera parameters of the fourth virtual camera to the target camera parameters can be implemented in the following manner: based on the n-th target camera parameter, adjust the camera parameter of the fourth virtual camera at the (n-1)-th smoothing moment to the n-th target camera parameter, to obtain the adjusted fourth virtual camera.
  • the following processing may also be performed: in response to an adjustment instruction for the target camera parameters, adjust the target camera parameters to obtain an adjusted fourth virtual camera.
  • the image rendered by the fourth virtual camera can also be caused to produce a shaking effect in the following manner: in response to a jitter-parameter adding instruction for the adjusted fourth virtual camera, add a jitter parameter to the camera parameters of the adjusted fourth virtual camera, so that the image rendered by the fourth virtual camera with the added jitter parameter produces a shaking effect.
  • in this way, a reverse application of the anti-shake processing is realized: the image rendered by the fourth virtual camera can be made to produce a shaking effect, simulating effects such as an earthquake in a real scene, which makes the picture rendered by the adjusted fourth virtual camera more realistic.
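A sketch of this reverse application might look as follows; the uniform-noise model and the amplitude parameter are illustrative assumptions, since the embodiments only state that a jitter parameter is added to the adjusted camera parameters:

```python
import random

def add_jitter(attitude_angle, amplitude=0.5):
    """Add a random shake (degrees) to a smoothed pitch/heading/roll triple."""
    noise = lambda: random.uniform(-amplitude, amplitude)
    pitch, heading, roll = attitude_angle
    return (pitch + noise(), heading + noise(), roll + noise())

print(add_jitter((10.0, 45.0, 0.0)))  # simulated hand-held or earthquake shake
```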
  • in this way, the camera parameters of the first virtual camera are smoothed to obtain the target camera parameters, the camera parameters of the second virtual camera are adjusted based on the target camera parameters to obtain the adjusted second virtual camera, and the image data collected by the physical camera is rendered through the adjusted second virtual camera to obtain an image of the virtual scene including the object; in the rendering process combining virtuality and reality, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera and the physical camera have the same camera parameters, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing the camera parameters of the physical camera, and the target camera parameters obtained by smoothing are transferred to the second virtual camera by configuring a second virtual camera whose focus corresponds to that of the first virtual camera; as a result, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters of the second virtual camera remain stable, which effectively improves the stability performance of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs.
  • in this way, virtual objects and the real world can be displayed within the same line of sight, and the visual perception of virtual-real fusion can be effectively improved, for example in live broadcast and video production; the tracking of physical objects (i.e., the objects in the real scene described above) and the camera-movement effects improve the picture quality, high-quality shot effects can be achieved, and physical objects (such as characters) can stand in the starry sky (in a virtual scene).
  • the embodiment of the present application smooths the camera parameters of the original virtual-real fusion camera in the virtual scene (i.e., the first virtual camera described above) to obtain smoothed camera parameters, and configures the smoothed camera parameters to a smooth virtual camera (i.e., the second virtual camera described above); in a virtual-real fusion application scenario, a smooth virtual camera can be configured in the virtual scene whose focus coordinates coincide with the focus coordinates of the original virtual-real fusion camera, so that the direction of the smooth virtual camera is consistent with the direction of the original virtual-real fusion camera, thereby realizing functions such as automatic tracking and anti-shake.
  • in this way, the perspective relationship is basically correct; that is, the distance between the focus of the smooth virtual camera and the focus of the original virtual-real fusion camera is less than a distance threshold, which can be set according to the application scenario.
  • FIG. 12 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application.
  • the original virtual camera in the virtual scene can be a virtual camera selected by the user; it can be a virtual camera that has a binding relationship with a physical camera, or a virtual camera that has no binding relationship with a physical camera; the camera parameters of the original virtual camera include the attitude angle, the field of view angle, and the three-dimensional coordinates.
  • the camera parameters of the original virtual camera are smoothed; the smoothed camera parameters can be obtained in the following way: the attitude angle, the field of view angle, and the three-dimensional coordinates are smoothed separately to obtain the smoothed attitude angle, the smoothed field of view angle, and the smoothed three-dimensional coordinates. Next, the smoothing of the attitude angle, the field of view angle, and the three-dimensional coordinates will be described separately with reference to Figure 12.
  • as shown in Figure 12, the virtual camera to be smoothed (i.e., the first virtual camera described above) is determined, and the attitude angle smoothing module is called to smooth the camera parameters of the virtual camera to be smoothed, obtaining the smoothed attitude angle.
  • FIG. 5 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application.
  • the original virtual cameras in the virtual scene include virtual camera 2, virtual camera 3, and virtual camera 4, where virtual camera 2 can be a virtual camera bound to physical camera 1, physical camera 1 being a camera that exists in the real world; virtual camera 3 can be a virtual camera looking in the direction of virtual camera 2, and virtual camera 4 can be any virtual camera selected based on a selection operation; virtual camera 3 and virtual camera 4 have no binding relationship with physical camera 1.
  • the virtual camera to be smoothed is determined; when the virtual camera to be smoothed is a virtual camera bound to a physical camera (i.e., the first virtual camera described above), the coordinates of each three-dimensional skeletal point 46 in the world coordinate system are determined based on the virtual-real fusion object module 42, and the spatial position 44 of the virtual camera bound to the physical camera is determined based on the virtual-real fusion camera module 43; based on the coordinates of each three-dimensional skeletal point and the position of the virtual camera bound to the physical camera, the attitude angle 47 of the virtual camera bound to the physical camera is determined, and the attitude angle smoothing module is called to smooth the attitude angle 47, obtaining the smoothed attitude angle.
  • the coordinates of each three-dimensional skeletal point in the world coordinate system can be weighted-averaged, and the weighted average is used as the object's three-dimensional coordinates in the virtual world.
  • the target vector can be expressed as v = (x, y, z), where v represents the target vector, x the component of the target vector in the horizontal-axis direction of the world coordinate system, y its component in the longitudinal-axis direction, and z its component in the vertical-axis direction.
  • the attitude angle of the virtual camera bound to the physical camera includes the roll angle, the pitch angle, and the heading angle:
  • G2 = asin(z)    (17)
  • G3 = atan2(y, x)    (18)
  • where G1 represents the roll angle, G2 the pitch angle, and G3 the heading angle.
  • in the attitude angle smoothing module, the attitude angle can be smoothed in the following manner: call a filter to smooth the attitude angle to obtain the smoothed attitude angle.
  • the smoothing can be expressed as αn = k1·βn + (1 − k1)·αn−1, where αn represents the smoothed attitude angle at time n, αn−1 the smoothed attitude angle at time n−1, βn the attitude angle before smoothing at time n, and k1 the attitude angle smoothing index, which represents the degree to which the attitude angle is smoothed.
  • the virtual camera to be smoothed is determined; when the virtual camera to be smoothed is the virtual camera 49 whose attitude angle is set through the mouse and touch operation 48 (i.e., the third virtual camera described above), the set attitude angle is obtained, and the attitude angle smoothing module is called to smooth the acquired attitude angle, obtaining the smoothed attitude angle.
  • the virtual camera to be smoothed is determined; when the virtual camera to be smoothed is a virtual camera looking in the direction of the virtual camera bound to the physical camera, the attitude angle 45 of the virtual camera looking in that direction is obtained, and the attitude angle smoothing module is called to smooth the attitude angle 45, obtaining the smoothed attitude angle.
  • the following describes the smoothing process of the attitude angle smoothing module: determine whether the attitude angle is a quaternion-type attitude angle; when the attitude angle is a quaternion-type attitude angle, it is smoothed element by element to obtain a smoothed quaternion-type attitude angle; when the attitude angle is an Euler-angle-type attitude angle, it is smoothed element by element (after conversion to a quaternion) to likewise obtain a smoothed quaternion-type attitude angle.
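A minimal sketch of the element-wise quaternion smoothing follows; the renormalization step is an assumption added so the result remains a unit quaternion, as the embodiments do not spell it out:

```python
import math

def smooth_quaternion(q_raw, q_prev, k1):
    """Smooth each quaternion element (a, b, c, d) with the first-order rule."""
    q = [k1 * r + (1.0 - k1) * p for r, p in zip(q_raw, q_prev)]
    norm = math.sqrt(sum(e * e for e in q))
    return [e / norm for e in q]  # renormalize to a unit quaternion (assumption)

# identity rotation smoothed toward a small rotation, k1 = 0.3
print(smooth_quaternion([0.1, 0.0, 0.0, 0.995], [0.0, 0.0, 0.0, 1.0], 0.3))
```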
  • FIG. 13 is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiment of the present application.
  • the virtual camera to be smoothed is determined; when the virtual camera to be smoothed is a virtual camera bound to a physical camera, the field of view angle of that camera is determined based on the virtual-real fusion real-time calibration module 50; when the virtual camera to be smoothed is not a virtual camera bound to a physical camera, the field of view angle set through mouse and touch operations is obtained; the field of view angle real-time smoothing module is then called to smooth the field of view angle, obtaining the smoothed field of view angle.
  • in the field of view angle real-time smoothing module, the field of view angle can be smoothed in the following manner: call a time-domain filter to smooth the input field of view angle to obtain the output of the field of view angle real-time smoothing module.
  • the smoothing can be expressed as αn = k2·βn + (1 − k2)·αn−1, where αn represents the smoothed field of view angle at time n, αn−1 the smoothed field of view angle at time n−1, βn the field of view angle before smoothing at time n, and k2 the field of view angle smoothing index, which represents the degree to which the field of view angle is smoothed.
  • FIG 14 is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiment of the present application.
  • the virtual camera to be smoothed is determined; when the virtual camera to be smoothed is a virtual camera bound to a physical camera, the three-dimensional coordinates of that camera are determined based on the virtual-real fusion real-time calibration module 51.
  • the smoothing can be expressed as αn = k3·βn + (1 − k3)·αn−1, where αn represents the smoothed three-dimensional coordinates of the camera at time n, αn−1 the smoothed three-dimensional coordinates at time n−1, βn the three-dimensional coordinates before smoothing at time n, and k3 the three-dimensional coordinate smoothing index, which represents the degree to which the three-dimensional coordinates are smoothed.
  • attitude angle shake can be added to the camera of the virtual stabilizer at a later stage (reversely utilizing the anti-shake feature) to simulate effects such as camera shake caused by earthquakes and hand-held camera shake, thereby improving the quality of the shot.
  • the smoothing method is not limited to the above first-order Infinite Impulse Response (IIR) filter; filters of other orders, or filters such as the Kalman filter, can also implement the smoothing.
  • the field of view can be set automatically according to the size of the virtual object projected onto the imaging surface of the virtual camera, thereby realizing automatic push-pull and pan movements of the virtual camera.
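One plausible way to realize this automatic setting, sketched under stated assumptions (the pinhole relation is standard, but the fill_ratio parameter and this particular control rule are illustrative, not from the embodiments):

```python
import math

def fov_to_fit(object_size, distance, fill_ratio=0.8):
    """Field of view (degrees) so the object fills fill_ratio of the image."""
    half_extent = (object_size / fill_ratio) / 2.0  # visible half-extent at the object's distance
    return 2.0 * math.degrees(math.atan2(half_extent, distance))

print(fov_to_fit(object_size=2.0, distance=5.0))
```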
  • the virtual camera parameter processing device 455 stored in the memory 450 may include the following software modules: an acquisition module 4551, configured to acquire the camera parameters of the first virtual camera, where the first virtual camera has a binding relationship with the physical camera in the real scene, and the physical camera is used to collect image data of objects in the real scene to obtain the image data of the objects; a smoothing module 4552, configured to smooth the camera parameters of the first virtual camera to obtain the target camera parameters; and an adjustment module 4553, configured to adjust, based on the target camera parameters, the camera parameters of the second virtual camera to obtain an adjusted second virtual camera, the focus of the second virtual camera corresponding to the focus of the first virtual camera; wherein the adjusted second virtual camera is configured to render, based on the image data, an image of the virtual scene including the object.
  • the above device further includes a configuration module configured to configure the second virtual camera in the virtual scene.
  • the above acquisition module 4551 is also configured to: obtain the target position of the object in the world coordinate system and the position of the first virtual camera; determine the target direction vector based on the target position of the object in the world coordinate system and the position of the first virtual camera, the target direction vector being used to indicate, in the world coordinate system, the direction in which the first virtual camera points toward the object; and determine the attitude angle of the first virtual camera based on the target direction vector.
  • the attitude angle includes a pitch angle and a heading angle; the above acquisition module 4551 is also configured to: determine the arcsine of the vertical-axis component of the target direction vector as the pitch angle of the first virtual camera, the vertical-axis component being the component of the target direction vector on the vertical axis of the world coordinate system; determine the ratio of the longitudinal-axis component to the horizontal-axis component of the target direction vector as a reference ratio, the longitudinal-axis component being the component of the target direction vector on the longitudinal axis of the world coordinate system and the horizontal-axis component its component on the horizontal axis; and determine the arctangent of the reference ratio as the heading angle of the first virtual camera.
  • the above acquisition module 4551 is also configured to obtain the coordinates of multiple skeletal points of the object in the world coordinate system and perform a weighted summation of the coordinates of the multiple skeletal points to obtain the target position of the object in the world coordinate system.
  • when there are multiple objects, the above acquisition module 4551 is also configured to perform the following processing for each object: obtain the coordinates of multiple skeletal points of the object in the world coordinate system; perform a weighted summation of the skeletal point coordinates to obtain the position of the object in the world coordinate system; and determine the target position based on the position of each object in the world coordinate system, where the distance between the target position and the position of each object in the world coordinate system is less than the distance threshold.
  • the first virtual camera has n smoothing moments, where n is a positive integer greater than 1; the above smoothing module 4552 is also configured to, when the camera parameters of the first virtual camera include the camera parameter of the n-th smoothing moment, obtain the smoothing index and the (n-1)-th target camera parameter, where the smoothing index is used to indicate the degree of smoothing of the camera parameters, and the (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the (n-1)-th smoothing moment.
  • the smoothing index is between 0 and 1; the above smoothing module 4552 is also configured to: determine the product of the camera parameter of the n-th smoothing moment and the smoothing index as the first reference parameter; determine the product of the (n-1)-th target camera parameter and the supplementary smoothing index as the second reference parameter, where the supplementary smoothing index is the difference between 1 and the smoothing index; and add the first reference parameter and the second reference parameter to obtain the n-th target camera parameter, which is used as the target camera parameter.
  • when the camera parameters include an attitude angle, the attitude angle includes a pitch angle, a heading angle, and a roll angle; the virtual camera parameter processing device also includes a locking module, configured to lock a target angle in response to a locking instruction for the target angle among the attitude angles, where the target angle includes at least one of the pitch angle, the heading angle, and the roll angle; the above smoothing module is also configured to smooth the part of the attitude angle other than the target angle to obtain the target camera parameters.
  • the above adjustment module 4553 is also configured to adjust the current camera parameters of the second virtual camera to the target camera parameters to obtain the adjusted second virtual camera; the virtual camera parameter processing device also includes an instruction adjustment module, configured to adjust the target camera parameters in response to an adjustment instruction for the target camera parameters to obtain the adjusted second virtual camera.
  • the above smoothing module 4552 is also configured to: obtain the data type of the attitude angle, where the data type includes a quaternion type and an Euler angle type; when the data type is the quaternion type, smooth each element in the quaternion-type attitude angle to obtain a quaternion-type reference attitude angle; and perform data type conversion on the quaternion-type reference attitude angle to obtain an Euler-angle-type reference attitude angle, which is determined as the target attitude angle.
  • the above virtual camera parameter processing device also includes a selection module, configured to determine multiple configured virtual cameras in the virtual scene, where each configured virtual camera is bound to a different physical camera, and, in response to a selection operation on the multiple configured virtual cameras, determine the selected virtual camera as the first virtual camera.
  • the above virtual camera parameter processing device also includes: a second acquisition module, configured to acquire the camera parameters of the third virtual camera, where the third virtual camera does not have a binding relationship with a physical camera in the real scene; a second smoothing module, configured to smooth the camera parameters of the third virtual camera to obtain the target camera parameters of the third virtual camera; a second configuration module, configured to configure the fourth virtual camera in the virtual scene, the focus of the fourth virtual camera corresponding to the focus of the third virtual camera; and a second adjustment module, configured to adjust, based on the target camera parameters of the third virtual camera, the camera parameters of the fourth virtual camera to obtain the adjusted fourth virtual camera, where the adjusted fourth virtual camera is configured to render an image of the virtual scene.
  • the above second acquisition module is further configured to: obtain the position parameter of the focus position of the third virtual camera; determine the direction vector of the third virtual camera based on the position parameter of the focus position and the position parameter of the third virtual camera; and determine the attitude angle of the third virtual camera based on the direction vector of the third virtual camera.
  • the above second acquisition module is also configured to obtain a virtual distance when a virtual object exists within the field of view of the third virtual camera, where the virtual distance is the distance between a first position and a second position, the first position being the position of the third virtual camera in the world coordinate system and the second position being the position of the virtual object in the world coordinate system; and to determine the field of view angle of the third virtual camera based on the virtual distance, where the numerical value of the virtual distance is proportional to the numerical value of the field of view angle.
  • the above virtual camera parameter processing device also includes a jitter module, configured to add a jitter parameter to the camera parameters of the adjusted fourth virtual camera in response to a jitter-parameter adding instruction for the adjusted fourth virtual camera, so that the image rendered by the fourth virtual camera with the added jitter parameter produces a shaking effect.
  • Embodiments of the present application provide a computer program product, which includes a computer program or computer-executable instructions stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device executes the virtual camera parameter processing method described in the embodiment of the present application.
  • Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to execute the virtual camera parameter processing method provided by the embodiments of the present application, for example, the virtual camera parameter processing method shown in FIG. 4.
  • the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; it may also be any electronic device including one or any combination of the above memories.
  • computer-executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • computer-executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file holding other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple collaborative files (for example, files storing one or more modules, subroutines, or portions of code).
  • computer-executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one location, or on multiple electronic devices distributed across multiple locations and interconnected by a communication network.
  • (1) The camera parameters of the first virtual camera bound to the physical camera are obtained and smoothed to obtain the target camera parameters, the camera parameters of the second virtual camera are adjusted based on the target camera parameters to obtain the adjusted second virtual camera, and the image data collected by the physical camera is rendered to obtain an image of the virtual scene including the object; in the rendering process combining virtuality and reality, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera and the physical camera have the same camera parameters, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing the camera parameters of the physical camera, and the target camera parameters obtained by smoothing are transferred to the second virtual camera by configuring a second virtual camera whose focus corresponds to that of the first virtual camera; as a result, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters of the second virtual camera remain stable, which effectively improves the stability performance of the virtual camera, saves the hardware cost of installing a hardware stabilizer on the physical camera, and thereby significantly reduces hardware costs.
  • the target position of the object in the world coordinate system is obtained, so that the target position is accurately determined, which facilitates subsequently determining the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle of the first virtual camera.
  • when there are multiple objects, the position of each object in the world coordinate system can be determined separately, and a position whose distance from the position of each object in the world coordinate system is less than the distance threshold can be determined as the target position, thereby accurately determining the target position, which facilitates subsequently determining the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle.
  • by configuring in the virtual scene a second virtual camera whose distance from the focus of the first virtual camera is less than the focus distance threshold, the second virtual camera can always follow the shooting direction of the first virtual camera, so that a second virtual camera with the same shooting function and the same perspective relationship as the first virtual camera is configured in the virtual scene, which replaces the first virtual camera in rendering, based on the image data, an image of the virtual scene including the object.
  • by configuring in the virtual scene a fourth virtual camera whose distance from the focus of the third virtual camera is less than the focus distance threshold, the fourth virtual camera can always follow the shooting direction of the third virtual camera, so that a fourth virtual camera with the same shooting function and the same perspective relationship as the third virtual camera is configured in the virtual scene, which replaces the third virtual camera in rendering, based on the image data, an image of the virtual scene including the object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application provides a virtual camera parameter processing method, apparatus, electronic device, storage medium, and program product. The method includes: acquiring the camera parameters of a first virtual camera in a virtual scene, the first virtual camera having a binding relationship with a physical camera in a real scene, the physical camera being used to collect image data of an object in the real scene to obtain image data of the object; smoothing the camera parameters of the first virtual camera to obtain target camera parameters; adjusting the camera parameters of a second virtual camera based on the target camera parameters to obtain an adjusted second virtual camera, the focus of the second virtual camera corresponding to the focus of the first virtual camera; wherein the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.

Description

Virtual camera parameter processing method, apparatus, device, storage medium, and program product
Cross-Reference to Related Application
This application is based on and claims priority to Chinese patent application No. 202211095639.3, filed on September 05, 2022, the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a virtual camera parameter processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Virtual-real fusion technology is a technology that skillfully integrates virtual scenes with real scenes. It is widely used in many technical fields such as multimedia video production, three-dimensional modeling, online meetings, real-time registration, intelligent interaction, and sensing. Virtual-real fusion is mainly reflected in the combination of virtuality and reality, real-time interaction, and three-dimensional interactive matching, and is mainly realized through display technology, interaction technology, sensing technology, and computer graphics and image technology.
In virtual fusion scenes in the related art, camera anti-shake is usually achieved through hardware stabilizers (for example, camera gimbals). Hardware stabilizers are extremely expensive; therefore, the anti-shake hardware cost in the related art is extremely high, and the stabilization performance is poor.
Summary
Embodiments of the present application provide a virtual camera parameter processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can effectively improve the stabilization performance of the virtual camera in a virtual fusion scene.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a virtual camera parameter processing method, including:
acquiring camera parameters of a first virtual camera, the first virtual camera having a binding relationship with a physical camera in a real scene, the physical camera being used to collect image data of an object in the real scene to obtain image data of the object;
smoothing the camera parameters of the first virtual camera to obtain target camera parameters;
adjusting, based on the target camera parameters, camera parameters of a second virtual camera in a virtual scene to obtain an adjusted second virtual camera, a focus of the second virtual camera corresponding to a focus of the first virtual camera;
wherein the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.
An embodiment of the present application provides a virtual camera parameter processing apparatus, including:
an acquisition module, configured to acquire camera parameters of a first virtual camera in a virtual scene, the first virtual camera having a binding relationship with a physical camera in a real scene, the physical camera being used to collect image data of an object in the real scene to obtain image data of the object;
a smoothing module, configured to smooth the camera parameters of the first virtual camera to obtain target camera parameters;
an adjustment module, configured to adjust, based on the target camera parameters, camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera, a focus of the second virtual camera corresponding to a focus of the first virtual camera;
wherein the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.
An embodiment of the present application provides an electronic device, including:
a memory, configured to store computer-executable instructions or a computer program;
a processor, configured to implement the virtual camera parameter processing method provided by the embodiments of the present application when executing the computer-executable instructions or the computer program stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the virtual camera parameter processing method provided by the embodiments of the present application.
An embodiment of the present application provides a computer program product, including a computer program or computer-executable instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the virtual camera parameter processing method described above.
The embodiments of the present application have the following beneficial effects:
Since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera and the physical camera have the same camera parameters, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing the camera parameters of the physical camera. Since the focus of the first virtual camera corresponds to the focus of the second virtual camera, after the camera parameters of the second virtual camera are adjusted based on the target camera parameters obtained by smoothing, the target camera parameters are transferred to the second virtual camera. In this way, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters of the second virtual camera remain stable, which effectively improves the stabilization performance of the virtual camera and saves the hardware cost of fitting a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs.
Brief Description of the Drawings
Figures 1A and 1B are schematic diagrams of effects in the related art;
Figure 2 is a schematic structural diagram of a virtual camera parameter processing system architecture provided by an embodiment of the present application;
Figure 3 is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present application;
Figure 4 is a schematic flowchart of a virtual camera parameter processing method provided by an embodiment of the present application;
Figure 5 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application;
Figures 6 and 7 are schematic flowcharts of a virtual camera parameter processing method provided by an embodiment of the present application;
Figure 8 is a schematic diagram of the principle of a virtual camera parameter processing method provided by an embodiment of the present application;
Figures 9 to 11 are schematic flowcharts of a virtual camera parameter processing method provided by an embodiment of the present application;
Figures 12 to 14 are schematic diagrams of the principle of a virtual camera parameter processing method provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, "some embodiments" describes subsets of all possible embodiments, but it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with one another without conflict.
In the following description, the terms "first/second/third" merely distinguish similar objects and do not represent a specific ordering of objects. It can be understood that "first/second/third" may be interchanged in specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field to which the present application belongs. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
Before the embodiments of the present application are described in detail, the nouns and terms involved in the embodiments of the present application are explained; they are subject to the following interpretations.
1) Virtual scene: a scene displayed (or provided) when an application program runs on an electronic device. The virtual scene may be a simulation environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiments of the present application do not limit the dimension of the virtual scene. For example, the virtual scene may include the sky, land, ocean, and the like; the land may include environmental elements such as deserts and cities; and the user may control a virtual object to move in the virtual scene.
2) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an anime character, or the like, for example, a character or animal displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects, each of which has its own shape and volume in the virtual scene and occupies part of the space in the virtual scene.
3) Virtual-real fusion: a technology that skillfully integrates virtual scenes with real scenes, widely used in many technical fields such as multimedia video production, three-dimensional modeling, online meetings, real-time registration, intelligent interaction, and sensing. Virtual-real fusion is mainly reflected in the combination of virtuality and reality, real-time interaction, and three-dimensional interactive matching, and is mainly realized through display technology, interaction technology, sensing technology, and computer graphics and image technology.
4) Virtual camera: a "camera" set up in computer animation software or a virtual engine for virtual shooting, i.e., shooting performed in a computer-generated virtual scene. By setting camera parameters such as the position and angle of the virtual camera, the virtual camera can simulate camera operations in real shooting. In animation production, the virtual camera plays the role of expressing the viewpoint, equivalent to a camera in the traditional sense. The shooting subject of the virtual camera is completely different from that of a physical camera: the physical camera shoots real people or actually built scenes, whereas the virtual camera shoots models built in three-dimensional software. The virtual camera is presented in the virtual engine in the form of an icon and likewise has parameters such as lens, focal length, focus, aperture, and depth of field, and can realize camera actions such as push, pull, pan, move, follow, whip, rise, fall, and combined movements. The camera parameters of the virtual camera are buttons or numeric input fields integrated on a panel; the operator only needs to enter parameters or drag the mouse to configure the virtual camera parameters.
In practical applications, when a virtual camera is bound to a physical camera, that is, the two have a binding relationship, the virtual camera and the bound physical camera have the same camera parameters. The physical camera may send a matching request to the game engine (virtual engine), and the game engine adds a virtual camera paired with the physical camera in the virtual scene. While the physical camera moves, it sends its camera parameters to the virtual camera in real time; after receiving them, the virtual camera synchronizes in real time based on the received camera parameters, so as to ensure that the first virtual camera and the bound physical camera have the same camera parameters. In other words, the virtual camera can be regarded as the digital twin of the physical camera in the real scene to which it is bound.
5) Virtual engine: the core component of pre-written editable computer virtual systems or interactive real-time image applications. These systems provide designers of virtual scenes with the various tools required to write virtual scenes, with the purpose of allowing designers to write programs easily and quickly. A virtual engine includes a rendering engine (including a two-dimensional rendering engine and a three-dimensional rendering engine), a physics engine, a collision detection engine, a sound engine, a script engine, an animation engine, an artificial intelligence engine, a network engine, a scene management engine, and the like.
6) Camera parameters: the camera parameters include at least one of attitude angle parameters, field-of-view parameters, and position parameters. The attitude angle parameters are defined according to the Euler concept and are therefore also called Euler angles. The attitude angle includes a roll angle, a pitch angle, and a heading angle; different rotation orders form different coordinate transformation matrices, and the spatial rotation of the camera coordinate system relative to the object coordinate system is usually expressed in the order of heading angle, pitch angle, and roll angle. The numerical value of the field-of-view parameter is positively correlated with the visual range of the virtual camera, and the position parameter represents the three-dimensional position coordinates of the camera.
7) Metaverse (Meta Verse): a digital living space created and linked by technological means, mapping the real world into an interactive virtual world, with a new type of social system. The metaverse is a new type of internet application and social form integrating multiple new technologies and fusing virtuality and reality. It provides immersive experience based on extended reality technology, generates a mirror image of the real world based on digital twin technology, builds an economic system through blockchain technology, closely integrates the virtual world and the real world in social, identity, and other systems, and allows each user to produce and edit content.
8) In response to: used to indicate the condition or state on which an executed operation depends. When the dependent condition or state is met, the one or more executed operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of the multiple operations executed.
During the implementation of the embodiments of the present application, the applicant found the following problems in the related art:
Referring to Figures 1A and 1B, which are schematic diagrams of effects in the related art, camera anti-shake is usually achieved through hardware stabilizers (for example, the hardware stabilizer 40 shown in Figure 1A and the hardware stabilizer 41 shown in Figure 1B). A hardware stabilizer can realize functions such as automatic follow-shooting of objects by adding extra camera accessories, but the cost of the hardware stabilizer rises as the cost of the camera rises; therefore, the cost of anti-shake is extremely high.
In the related art, a feature point detection algorithm may also be used to detect the feature points of each image frame and smooth the motion trajectories of identical feature points to obtain a smoothing result, based on which each frame is enlarged, reduced, rotated, cropped, or warped to obtain an anti-shake-processed video. However, the above anti-shake algorithms in the related art often occupy a large amount of computing resources, and the anti-shake effect is poor.
Compared with the above related art, the virtual camera parameter processing method provided by the embodiments of the present application can obtain the same high-quality stabilization effect as a hardware stabilizer, with extremely low complexity, running in real time without occupying computing resources. Moreover, no complicated installation and debugging operations are required: the method can be used immediately simply by connecting the camera feed. It is extremely easy to use, has an extremely low threshold, and can completely replace a physical stabilizer, thereby saving costs.
Embodiments of the present application provide a virtual camera parameter processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can effectively improve the stabilization performance of the virtual camera and its anti-shake effect while significantly reducing hardware costs. The following describes an exemplary application of the virtual camera parameter processing system provided by the embodiments of the present application.
Referring to Figure 2, which is a schematic architecture diagram of the virtual camera parameter processing system 100 provided by an embodiment of the present application, a terminal (terminal 400 is shown as an example) is connected to the server 200 through a network 300, which may be a wide area network, a local area network, or a combination of the two.
A client 410 may be provided on the terminal 400, configured to display an image of the virtual scene on a graphical interface 410-1 (shown as an example). For example, an image of a virtual scene in an online game application (Application, APP) is displayed on the graphical interface 410-1. As another example, the terminal 400 runs an online meeting APP and displays an image of the virtual scene on the graphical interface 410-1. As another example, the terminal 400 runs a video APP and displays an image of the virtual scene on the graphical interface 410-1. The terminal 400 and the server 200 are connected to each other through a wired or wireless network.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, or an in-vehicle terminal. The electronic device provided by the embodiments of the present application may be implemented as a terminal or as a server. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
In some embodiments, the virtual camera parameter processing method provided by the embodiments of the present application may also be applied to display scenes of images of virtual scenes related to augmented reality.
For example, the terminal 400 (such as a virtual wearable terminal) is used to display an image of the virtual scene. The terminal 400 displays icons of virtual cameras available for the user to select on the graphical interface 410-1 and determines the selected virtual camera in response to the user's selection operation on the displayed icons. The terminal 400 sends the selected virtual camera to the server 200 through the network 300; the server 200 smooths the camera parameters of the selected virtual camera to obtain target camera parameters, renders an image of the virtual scene through a virtual camera configured with the target camera parameters, and sends the rendered image of the virtual scene to the terminal 400, which displays it on the graphical interface 410-1.
In some embodiments, the terminal 400 displays icons of virtual cameras available for selection on the graphical interface 410-1 and, in response to the user's selection operation on the displayed icons, acquires the camera parameters of the selected virtual camera, smooths them to obtain target camera parameters, renders an image of the virtual scene through a virtual camera configured with the target camera parameters, and displays the rendered image on the graphical interface 410-1.
In other embodiments, the terminal 400 displays icons of virtual cameras available for selection on the graphical interface 410-1. In response to the user's selection operation on the displayed icons, the terminal 400 sends the camera parameters of the selected virtual camera to the server 200 through the network 300. The server 200 receives the camera parameters of the selected virtual camera and sends the smoothed target camera parameters back to the terminal 400, which receives them, renders an image of the virtual scene through a virtual camera configured with the target camera parameters, and displays the image of the virtual scene on the graphical interface 410-1.
In other embodiments, the embodiments of the present application may be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or local area network to realize computation, storage, processing, and sharing of data.
Referring to Figure 3, which is a schematic structural diagram of an electronic device 500 for virtual camera parameter processing provided by an embodiment of the present application, the electronic device 500 shown in Figure 3 may be the server 200 or the terminal 400 in Figure 2 and includes: at least one processor 410, a memory 450, and at least one network interface 420. The components in the electronic device 500 are coupled together through a bus system 440. It can be understood that the bus system 440 is used to realize connection and communication between these components. In addition to a data bus, the bus system 440 also includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 440 in Figure 3.
The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memories, hard disk drives, optical disk drives, and the like. The memory 450 optionally includes one or more storage devices physically remote from the processor 410.
The memory 450 includes volatile memory or non-volatile memory, and may also include both. The non-volatile memory may be a Read-Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 450 described in the embodiments of the present application is intended to include any suitable type of memory.
In some embodiments, the memory 450 can store data to support various operations, examples of which include programs, modules, and data structures or subsets or supersets thereof, as exemplified below.
An operating system 451 includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, used to realize various basic services and handle hardware-based tasks;
a network communication module 452 is used to reach other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like.
In some embodiments, the virtual camera parameter processing apparatus provided by the embodiments of the present application may be implemented in software. Figure 3 shows the virtual camera parameter processing apparatus 455 stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, including the following software modules: an acquisition module 4551, a smoothing module 4552, and an adjustment module 4553. These modules are logical and may therefore be arbitrarily combined or further split according to the functions realized. The functions of each module are described below.
In other embodiments, the virtual camera parameter processing apparatus provided by the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to execute the virtual camera parameter processing method provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may adopt one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic elements.
In some embodiments, the terminal or server may implement the virtual camera parameter processing method provided by the embodiments of the present application by running a computer program or computer-executable instructions. For example, the computer program may be a native program (for example, a dedicated image deblurring program) or software module in an operating system, for example a deblurring module that can be embedded in any program (such as an instant messaging client, a photo album program, an electronic map client, or a navigation client); it may also be a native (Native) application (APP, Application), i.e., a program that needs to be installed in the operating system to run. In summary, the above computer program may be an application, module, or plug-in in any form.
The virtual camera parameter processing method provided by the embodiments of the present application is described below in conjunction with exemplary applications and implementations of the server or terminal provided by the embodiments of the present application.
Referring to Figure 4, which is a schematic flowchart of the virtual camera parameter processing method provided by an embodiment of the present application, the method is described with reference to steps 101 to 103 shown in Figure 4. The virtual camera parameter processing method provided by the embodiments of the present application may be implemented by a server or a terminal alone, or collaboratively by a server and a terminal; the following description takes implementation by a server alone as an example.
In step 101, the server acquires the camera parameters of the first virtual camera in the virtual scene.
In some embodiments, the camera parameters may include at least one of an attitude angle, a field of view, and a camera position. The first virtual camera has a binding relationship with a physical camera in the real scene, and the physical camera is used to collect image data of an object in the real scene to obtain image data of the object.
In some embodiments, the binding relationship between the first virtual camera and the physical camera in the real scene means that the first virtual camera and the bound physical camera have the same camera parameters; that is, the camera parameters of the first virtual camera change as the camera parameters of the bound physical camera change and always remain the same as those of the bound physical camera. In practical applications, this can be achieved as follows: the physical camera sends a matching request to the game engine, and the game engine adds a virtual camera paired with the physical camera in the virtual scene; while the physical camera moves, it sends its camera parameters to the virtual camera in real time, and after receiving them the virtual camera synchronizes in real time based on the received camera parameters, so as to ensure that the first virtual camera and the bound physical camera have the same camera parameters.
In practical applications, the virtual cameras mentioned herein (such as the first virtual camera, the second virtual camera, and the third virtual camera) may be used for virtual shooting, which is shooting performed in a computer-generated virtual scene. By setting camera parameters such as the position and angle of the virtual camera, the virtual camera can simulate camera operations in real shooting. Taking the first virtual camera herein as an example, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera can be regarded as the digital twin of the physical camera in the real scene to which it is bound.
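As an illustrative sketch of the binding described above (the class and method names are assumptions, not engine APIs): the physical camera pushes its parameters each frame and the bound virtual camera mirrors them, acting as its digital twin.

```python
import dataclasses

@dataclasses.dataclass
class CameraParams:
    attitude: tuple   # (pitch, heading, roll)
    fov: float        # field of view
    position: tuple   # (x, y, z)

class BoundVirtualCamera:
    """Digital twin of a physical camera; mirrors its parameters in real time."""
    def __init__(self):
        self.params = None

    def on_physical_update(self, params: CameraParams):
        self.params = params  # real-time synchronization keeps both cameras identical

twin = BoundVirtualCamera()
twin.on_physical_update(CameraParams((0.0, 90.0, 0.0), 60.0, (1.0, 2.0, 0.5)))
print(twin.params)
```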
As an example, referring to Figure 5, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by an embodiment of the present application, the first virtual camera may be the virtual camera 2 that has a binding relationship with the physical camera 1 in the real scene.
In some embodiments, a virtual camera is configured to collect image data of virtual objects in the virtual scene to obtain image data of the virtual objects.
In some embodiments, the number of virtual cameras in the virtual scene may be at least two, the at least two virtual cameras including virtual cameras bound to physical cameras and virtual cameras not bound to physical cameras.
In some embodiments, the camera parameters include the attitude angle, the field of view, and the camera position, where the attitude angle includes a pitch angle, a heading angle, and a roll angle. Different rotation orders of the virtual camera form different coordinate transformation matrices; the spatial rotation of the virtual camera's coordinate system relative to the geographic coordinate system is usually expressed in the order of heading angle, pitch angle, and roll angle. The size of the Field Of View determines the visual range of the virtual camera; the field of view is the angle formed with the lens of the virtual camera as the vertex and the two edges of the maximum range through which the object image of the measured target can pass through the lens.
In practical applications, before the server acquires the camera parameters of the first virtual camera in the virtual scene, one of the multiple virtual cameras already configured in the virtual scene must first be selected as the first virtual camera. In some embodiments, the server may select the first virtual camera as follows: determine multiple configured virtual cameras in the virtual scene, where each configured virtual camera is bound to a different physical camera; in response to a selection operation on the multiple configured virtual cameras, determine the selected virtual camera as the first virtual camera.
Here, in practical applications, the above selection operation of the virtual camera may be triggered by the user through the client. For example, the client displays the camera icons of multiple virtual cameras available for selection, the user triggers the selection operation for the corresponding virtual camera by triggering its camera icon, and the client then sends information about the selection operation to the server so that the server responds to the selection operation.
In some embodiments, in a virtual-real fusion application scenario, multiple real cameras are configured in the real scene, different real cameras collect data of objects in different real scenes, and a virtual camera bound to each real camera is configured in the virtual scene, so that virtual-real fusion is realized by using the virtual camera as a bridge between the real scene and the virtual scene.
In this way, by configuring a virtual camera bound to each real camera in the virtual scene, taking the bound virtual cameras as configured virtual cameras, and determining the selected virtual camera as the first virtual camera in response to a selection operation on the multiple configured virtual cameras, the camera parameters of the virtual camera to be smoothed are determined by selection. When many physical cameras are configured, the number of virtual cameras bound to physical cameras also increases sharply; by determining the smoothing object through a selection operation on the configured virtual cameras, it is unnecessary to smooth the camera parameters of every virtual camera. Instead, virtual cameras are selected selectively and only the camera parameters of the selected virtual camera are smoothed, which effectively reduces the number of virtual cameras whose camera parameters need to be smoothed, effectively reduces the amount of computation, and improves smoothing efficiency.
In some embodiments, when the camera parameters include the attitude angle, referring to Figure 6, which is a schematic flowchart of the virtual camera parameter processing method provided by an embodiment of the present application, step 101 shown in Figure 6 may be implemented by performing the following steps 1011 to 1013.
In step 1011, the target position of the object in the world coordinate system and the position of the first virtual camera are acquired.
In some embodiments, the world coordinate system refers to the absolute coordinate system of the system; before a user coordinate system is established, the coordinates of all points on the screen are determined with respect to the origin of this coordinate system.
In some embodiments, the physical camera and the bound virtual camera have the same position in the world coordinate system. The physical camera collects image data of objects in the real scene and sends the collected image data to the virtual camera bound to the physical camera in the virtual engine.
In some embodiments, the number of objects in the real scene may be at least one. When there are multiple objects in the real scene, different objects may be captured by the same physical camera or by different physical cameras; for example, when there are multiple objects located in different regions, each region corresponds to one physical camera, which collects image data of the objects in that region.
In some embodiments, when the number of objects within the field of view of the physical camera bound to the first virtual camera is one, that is, when there is one object, acquiring the target position of the object in the world coordinate system in step 1011 may be implemented as follows: acquire the coordinates of multiple skeletal points of the object in the world coordinate system; perform a weighted summation of the multiple skeletal point coordinates to obtain the target position of the object in the world coordinate system.
In some embodiments, an object in the real scene includes multiple skeletal points, and different skeletal points have different positions in the world coordinate system. Skeletal points are the skeletal support points of the object's outer form; they are located at the turning parts of the form and play a crucial role in modeling.
In some embodiments, the expression for the target position of the object in the world coordinate system may be:
X = a1·X1 + a2·X2 + a3·X3 + … + an·Xn       (1)
where X represents the target position of the object in the world coordinate system; a1 represents the weight corresponding to skeletal point 1 of the object and X1 the position of skeletal point 1 in the world coordinate system; a2 and X2 represent the weight and position of skeletal point 2; a3 and X3 the weight and position of skeletal point 3; an and Xn the weight and position of skeletal point n; and n represents the number of skeletal points of the object.
In some embodiments, different skeletal points correspond to different weights in the weighted summation; the weight corresponding to each skeletal point may be specifically set according to the actual situation, and the sum of the weights corresponding to the skeletal points may be equal to 1.
In this way, the target position of the object in the world coordinate system is obtained by weighted summation of the multiple skeletal point coordinates, so that the target position is accurately determined, which facilitates subsequently determining the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle.
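A minimal sketch of formula (1) follows, assuming the weights a1..an sum to 1 as the text allows:

```python
def target_position(bone_coords, weights):
    """Weighted sum of skeletal point coordinates in the world coordinate system."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights assumed to sum to 1
    return tuple(
        sum(w * c[axis] for w, c in zip(weights, bone_coords))
        for axis in range(3)
    )

# two example skeletal points with equal weights
print(target_position([(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)], [0.5, 0.5]))
```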
In some embodiments, when the number of objects within the field of view of the physical camera bound to the first virtual camera is at least two, that is, when there are at least two objects, acquiring the target position of the object in the world coordinate system in step 1011 may be implemented as follows: perform the following processing for each object: acquire the coordinates of multiple skeletal points of the object in the world coordinate system; perform a weighted summation of the multiple skeletal point coordinates to obtain the position of the object in the world coordinate system; and determine the target position based on the position of each object in the world coordinate system, where the distance between the target position and the position of each object in the world coordinate system is less than a distance threshold.
In some embodiments, the distance threshold may be set according to the actual application scenario.
In this way, when there are at least two objects, the position of each object in the world coordinate system may be determined separately, and a position whose distance from each object's position in the world coordinate system is less than the distance threshold is determined as the target position, so that the target position is accurately determined, which facilitates subsequently determining the attitude angle of the first virtual camera based on the target position and effectively improves its accuracy.
In step 1012, the target direction vector is determined based on the target position of the object in the world coordinate system and the position of the first virtual camera.
In some embodiments, the target direction vector is used to indicate, in the world coordinate system, the direction in which the first virtual camera points toward the object.
In some embodiments, the target direction vector is a vector in the world coordinate system starting from the position of the first virtual camera and ending at the target position of the object in the world coordinate system.
In step 1013, the attitude angle of the first virtual camera is determined based on the target direction vector.
In some embodiments, the expression of the target direction vector may be:
V = (x, y, z)    (2)
where V represents the target direction vector, x its component on the horizontal axis, y its component on the longitudinal axis, and z its component on the vertical axis of the world coordinate system.
In some embodiments, the above attitude angle includes a pitch angle and a heading angle; the above step 1013 may be implemented as follows: determine the arcsine of the vertical-axis component of the target direction vector as the pitch angle of the first virtual camera, the vertical-axis component being the component of the target direction vector on the vertical axis of the world coordinate system; determine the ratio of the longitudinal-axis component to the horizontal-axis component of the target direction vector as a reference ratio, the longitudinal-axis component being the component on the longitudinal axis and the horizontal-axis component the component on the horizontal axis of the world coordinate system; and determine the arctangent of the reference ratio as the heading angle of the first virtual camera.
In some embodiments, the attitude angle of the first virtual camera includes a roll angle, whose magnitude may be 0.
In some embodiments, the expression of the pitch angle of the first virtual camera may be:
Q = b1·asin(z)    (3)
where b1 represents the pitch angle coefficient, z the vertical-axis component of the target direction vector, and Q the pitch angle of the first virtual camera.
In some embodiments, the expression of the heading angle of the first virtual camera may be:
W = b2·atan2(y, x)      (4)
where W represents the heading angle of the first virtual camera, b2 the heading angle coefficient, x the horizontal-axis component of the target direction vector, and y its longitudinal-axis component.
In this way, the pitch angle and heading angle of the first virtual camera are accurately determined based on the target direction vector, which facilitates subsequent smoothing based on accurate pitch and heading angles and effectively improves the accuracy of determining the pitch angle and heading angle.
In step 102, the camera parameters of the first virtual camera are smoothed to obtain the target camera parameters.
In some embodiments, the camera parameters of the first virtual camera include at least one of the attitude angle, the field of view, and the camera position, and the attitude angle includes the pitch angle, heading angle, and roll angle. The above step 102 may be implemented by performing at least one of the following: smooth the attitude angle of the first virtual camera to obtain the target attitude angle; smooth the field of view of the first virtual camera to obtain the target field of view; smooth the camera position of the first virtual camera to obtain the target camera position.
In some embodiments, smoothing the attitude angle of the first virtual camera to obtain the target attitude angle parameter may be implemented as follows: smooth the pitch angle of the first virtual camera to obtain the target pitch angle; smooth the heading angle of the first virtual camera to obtain the target heading angle; smooth the roll angle of the first virtual camera to obtain the target roll angle.
In some embodiments, smoothing refers to a processing method that reduces the gap between two parameters to be smoothed at adjacent smoothing moments so as to achieve parameter smoothing. Taking the attitude angle as the parameter to be smoothed as an example, the first virtual camera has multiple smoothing moments, each corresponding to one attitude angle of the first virtual camera, and the gap between the attitude angles of any two adjacent smoothing moments is reduced to an attitude angle difference threshold, so as to smooth the attitude angle of the first virtual camera.
In some embodiments, the first virtual camera has n smoothing moments, where n is a positive integer greater than 1. Referring to Figure 6, which is a schematic flowchart of the virtual camera parameter processing method provided by an embodiment of the present application, step 102 shown in Figure 6 may be implemented by performing the following steps 1021 to 1022.
In step 1021, when the camera parameters of the first virtual camera include the camera parameter of the n-th smoothing moment, the smoothing index and the (n-1)-th target camera parameter are acquired.
In some embodiments, the camera parameter of the n-th smoothing moment is the camera parameter of the first virtual camera at the n-th smoothing moment before smoothing. The smoothing index is used to indicate the degree of smoothing of the camera parameter. The (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the (n-1)-th smoothing moment. As an example, when n = 2, the smoothing index and the 1st target camera parameter are acquired, where the 1st target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the 1st smoothing moment.
As an example, when n = 3, the smoothing index and the 2nd target camera parameter are acquired, where the 2nd target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the 2nd smoothing moment.
In step 1022, the camera parameter of the n-th smoothing moment is smoothed based on the smoothing index and the (n-1)-th target camera parameter to obtain the n-th target camera parameter, which is taken as the target camera parameter.
In some embodiments, the smoothing index is between 0 and 1 and is used to indicate the degree of smoothing of the camera parameter: the higher the smoothing index, the higher the degree of smoothing of the corresponding camera parameter; the smoothing index may be set according to different application scenarios.
In some embodiments, the smoothing index is between 0 and 1, and the above step 1022 may be implemented as follows: determine the product of the camera parameter of the n-th smoothing moment and the smoothing index as the first reference parameter; determine the product of the (n-1)-th target camera parameter and the supplementary smoothing index as the second reference parameter, where the supplementary smoothing index is the difference between 1 and the smoothing index; add the first reference parameter and the second reference parameter to obtain the n-th target camera parameter, which is taken as the target camera parameter. In some embodiments, the expression of the first reference parameter may be:
T1 = k1·βn     (5)
where T1 represents the first reference parameter, k1 the smoothing index, and βn the camera parameter of the n-th smoothing moment.
In some embodiments, the expression of the second reference parameter may be:
T2 = (1 − k1)·αn−1    (6)
where T2 represents the second reference parameter, k1 the smoothing index, αn−1 the (n-1)-th target camera parameter, and 1 − k1 the supplementary smoothing index.
In some embodiments, the expression of the n-th target camera parameter may be:
αn = T1 + T2 = k1·βn + (1 − k1)·αn−1      (7)
where αn represents the n-th target camera parameter, T1 the first reference parameter, and T2 the second reference parameter.
In this way, by smoothing each of the n smoothing moments of the first virtual camera, the camera parameter of the first virtual camera at every smoothing moment is smoothed, so that the gap between the camera parameters of any two adjacent smoothing moments does not change abruptly, thereby achieving smoothing of the camera parameters.
In some embodiments, when the camera parameters include the attitude angle, the attitude angle includes the pitch angle, heading angle, and roll angle. Before the above step 102, the target angle may also be locked as follows: in response to a locking instruction for a target angle among the attitude angles, lock the target angle, where the target angle includes at least one of the pitch angle, the heading angle, and the roll angle. As an example, in response to a locking instruction for the pitch angle and heading angle, the pitch angle and heading angle are locked. As an example, in response to a locking instruction for the pitch angle, the pitch angle is locked. As an example, in response to a locking instruction for the pitch angle and roll angle, the pitch angle and roll angle are locked. In some embodiments, locking the target angle is used to keep the target angle from being smoothed.
In some embodiments, the above step 102 may also be implemented as follows: smooth the part of the attitude angle other than the target angle to obtain the target camera parameters, as sketched below. In this way, by smoothing the part of the attitude angle other than the target angle, stepwise or partial smoothing of different attitude angles is realized, which ensures the controllability of attitude angle smoothing and meets the smoothing requirements of various application scenarios; at the same time, progressive smoothing can be realized, which effectively reduces the error rate in the smoothing process and improves smoothing accuracy.
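A sketch of the locking behavior, assuming a dictionary representation of the attitude angle (an illustrative choice): angles named in `locked` bypass the filter and are returned unchanged.

```python
def smooth_attitude(raw, prev, k, locked=frozenset()):
    """Smooth pitch/heading/roll, leaving any locked target angle untouched."""
    out = {}
    for name in ("pitch", "heading", "roll"):
        if name in locked:
            out[name] = raw[name]  # locked target angle: not smoothed
        else:
            out[name] = k * raw[name] + (1.0 - k) * prev[name]
    return out

raw = {"pitch": 30.0, "heading": 50.0, "roll": 0.0}
prev = {"pitch": 10.0, "heading": 40.0, "roll": 0.0}
print(smooth_attitude(raw, prev, 0.2, locked={"pitch"}))
```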
在一些实施例中,当相机参数包括姿态角时,参见图7,图7是本申请实施例提供的虚拟相机的参数处理方法的流程示意图,图7所示出的步骤102可以通过执行以下步骤1023至步骤1025实现。
在步骤1023中,获取姿态角的数据类型,其中,数据类型包括四元数类型和欧拉角类型。
在一些实施例中,欧拉角类型的姿态角,用于欧拉角是用于,确定虚拟相机位置的三个一组独立角参量,由章动角、进动角和自转角组成。
在一些实施例中,四元数类型的姿态角,其中,四元数是复数的不可交换延伸,如把四元数的集合考虑成多维实数空间的话,四元数就代表着一个四维空间,相对于复数为二维空间。作为示例,四元数类型的姿态角的表达式可以为:
ai+bj+ck+d      (8)
其中,a、b、c、d表征四元数中的各元素,i、j、k表征四元数中的虚数单位。
In step 1024, when the data type is the quaternion type, each element of the quaternion-type attitude angle is smoothed to obtain a quaternion-type reference attitude angle.
As an example, referring to FIG. 8, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application: when the data type is the quaternion type, the elements of the quaternion-type attitude angle are smoothed one by one to obtain a quaternion-type reference attitude angle. As an example, when the data type is the quaternion type, each element (a, b, c, d) of the quaternion-type attitude angle ai + bj + ck + d is smoothed to obtain the quaternion-type reference attitude angle.
In step 1025, the quaternion-type reference attitude angle undergoes data-type conversion to obtain an Euler-angle-type reference attitude angle, and the Euler-angle-type reference attitude angle is determined as the target attitude angle.
As an example, referring to FIG. 8, the quaternion-type reference attitude angle undergoes data-type conversion to obtain the Euler-angle-type reference attitude angle, which is determined as the target attitude angle. In this way, by converting and smoothing attitude angles of different types according to their data type, the universality of attitude-angle smoothing is effectively improved.
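For illustration, element-wise quaternion smoothing followed by conversion to Euler angles (steps 1024 to 1025) can be sketched in Python as follows. The renormalization step and the rotation-order convention are assumptions, since element-wise blending generally leaves the quaternion slightly off unit length and the embodiments do not fix a conversion convention.
    import math

    def smooth_quaternion(prev_q, raw_q, k):
        # element-wise smoothing of the quaternion elements (a, b, c, d) of formula (8)
        q = [k * r + (1.0 - k) * p for p, r in zip(prev_q, raw_q)]
        norm = math.sqrt(sum(e * e for e in q))  # renormalize back to a unit quaternion
        return [e / norm for e in q]

    def quaternion_to_euler(q):
        # (a, b, c, d) with d as the real part, mapped here to (x, y, z, w)
        x, y, z, w = q
        roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
        pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
        yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
        return roll, pitch, yaw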
In step 103, the camera parameters of the second virtual camera in the virtual scene are adjusted based on the target camera parameters to obtain an adjusted second virtual camera.
Here, the focus of the second virtual camera corresponds to the focus of the first virtual camera. In practical applications, before the camera parameters of the second virtual camera are adjusted, the second virtual camera needs to be configured in the virtual scene.
In some embodiments, the focus of the second virtual camera corresponding to the focus of the first virtual camera may mean that the distance between the focus of the second virtual camera and the focus of the first virtual camera is less than a focus distance threshold (which can be set according to actual needs), or that the two foci coincide. For example, when the distance between the focus of the second virtual camera and the focus of the first virtual camera is 0, the two focus positions coincide.
In some embodiments, the second virtual camera needs to maintain the same perspective relationship as the first virtual camera. By setting the focus of the second virtual camera such that its distance from the focus of the first virtual camera is less than the focus distance threshold, i.e., the focus position of the first virtual camera and that of the second virtual camera are close to each other, the second virtual camera is guaranteed to maintain the same perspective relationship as the first virtual camera.
In this way, by configuring in the virtual scene a second virtual camera whose focus is within the focus distance threshold of the focus of the first virtual camera, the second virtual camera always follows the shooting direction of the first virtual camera. A second virtual camera with the same shooting function and the same perspective relationship as the first virtual camera is thus configured in the virtual scene, so that the second virtual camera can replace the first virtual camera for image rendering; that is, the adjusted second virtual camera renders, based on the image data, an image of the virtual scene including the object.
In some embodiments, the camera parameters of the adjusted second virtual camera are the target camera parameters; that is, adjusting the camera parameters of the second virtual camera includes setting the camera parameters of the second virtual camera to the target camera parameters.
In some embodiments, the target camera parameters include at least one of a target attitude angle, a target field-of-view angle and a target position. Adjusting the camera parameters of the second virtual camera in the virtual scene can be implemented as follows: based on at least one of the target attitude angle, the target field-of-view angle and the target position, the camera parameters of the second virtual camera are adjusted to obtain the adjusted second virtual camera.
In some embodiments, adjusting the camera parameters of the second virtual camera in the virtual scene can be implemented as follows: the current camera parameters of the second virtual camera are adjusted to the target camera parameters to obtain the adjusted second virtual camera.
In some embodiments, adjusting the current camera parameters of the second virtual camera to the target camera parameters to obtain the adjusted second virtual camera can be implemented as follows: based on the n-th target camera parameter, the camera parameter of the second virtual camera at the (n-1)-th smoothing instant is adjusted to the n-th target camera parameter to obtain the adjusted second virtual camera.
In some embodiments, after the adjustment to the target camera parameters, the following processing can also be performed: in response to an adjustment instruction for the target camera parameters, the target camera parameters are adjusted to obtain the adjusted second virtual camera. That is, after the camera parameters of the second virtual camera have been adjusted to the target camera parameters, the user can trigger an adjustment instruction to adjust the target camera parameters of the second virtual camera; for example, when the target camera parameters include a target field-of-view angle, the user can trigger an adjustment instruction to adjust the size of the target field-of-view angle of the second virtual camera.
In some embodiments, the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.
In this way, the camera parameters of the first virtual camera bound to the physical camera are acquired and smoothed to obtain the target camera parameters; based on the target camera parameters, the camera parameters of the second virtual camera are adjusted to obtain the adjusted second virtual camera; and through the adjusted second virtual camera, the image data collected by the physical camera is rendered into an image of the virtual scene including the object. During virtual-real fusion rendering, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera has the same camera parameters as the physical camera, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing those of the physical camera. Moreover, by configuring a second virtual camera whose focus corresponds to that of the first virtual camera, the target camera parameters obtained by smoothing are transferred to the second virtual camera. As a result, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters at the second virtual camera remain stable, which effectively improves the stability of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs.
In some embodiments, referring to FIG. 9, which is a schematic flowchart of the virtual camera parameter processing method provided by the embodiments of the present application, the description will be given with reference to steps 201 to 204 shown in FIG. 9. Steps 201 to 204 below may be executed by a server or a terminal, or implemented cooperatively by a server and a terminal; the following description takes the server as the executing entity.
In step 201, the camera parameters of a third virtual camera are acquired, the third virtual camera having no binding relationship with the physical camera in the real scene.
In some embodiments, the camera parameters of the third virtual camera include at least one of an attitude angle, a field-of-view angle and a camera position. The third virtual camera has no binding relationship with the physical camera in the real scene; the physical camera is used to collect image data of the object in the real scene to obtain the image data of the object.
In some embodiments, the virtual camera is used to collect image data of a virtual object in the virtual scene to obtain the image data of the virtual object.
In some embodiments, the camera parameters of the third virtual camera include at least one of an attitude angle, a field-of-view angle and a camera position, and the attitude angle includes a pitch angle, a heading angle and a roll angle. Different rotation orders of the virtual camera produce different coordinate transformation matrices; the spatial rotation of the virtual camera's coordinate system relative to the geographic coordinate system is usually expressed in the order of heading angle, pitch angle and roll angle. The size of the field-of-view angle determines the visual range of the virtual camera: the field-of-view angle is the angle formed, with the lens of the virtual camera as the vertex, by the two edges of the maximum range through which the image of the observed target can pass through the lens.
In some embodiments, before step 201 above, the third virtual camera can also be selected as follows: multiple configured third virtual cameras in the virtual scene are determined; in response to a selection operation on the multiple configured third virtual cameras, the selected third virtual camera is determined as the above-mentioned third virtual camera.
In practical applications, the selection operation for the third virtual camera can be triggered by the user through a client. For example, the client displays the camera icons of multiple selectable virtual cameras; the user triggers the selection operation for the corresponding virtual camera by triggering its camera icon; the client then sends the information of the selection operation to the server, so that the server responds to the selection operation and determines the virtual camera selected by the user as the third virtual camera.
In some embodiments, when the camera parameters of the third virtual camera include an attitude angle, referring to FIG. 10, which is a schematic flowchart of the virtual camera parameter processing method provided by the embodiments of the present application, step 201 shown in FIG. 10 can be implemented by performing the following steps 2011 to 2013.
In step 2011, the position parameter of the focus position of the third virtual camera is acquired.
In some embodiments, the position parameter of the focus position of the third virtual camera is the position coordinate of the focus in the world coordinate system.
In some embodiments, the world coordinate system refers to the absolute coordinate system of the system; before a user coordinate system is established, the coordinates of all points on the screen are determined relative to the origin of this coordinate system.
In step 2012, the direction vector of the third virtual camera is determined based on the position parameter of the focus position and the position parameter of the third virtual camera.
In some embodiments, the direction vector of the third virtual camera is used to indicate, in the world coordinate system, the direction in which the third virtual camera points.
In some embodiments, the direction vector of the third virtual camera is a vector in the world coordinate system whose starting point is the position of the third virtual camera and whose end point is the focus position of the third virtual camera.
In step 2013, the attitude angle of the third virtual camera is determined based on the direction vector of the third virtual camera.
In some embodiments, the direction vector of the third virtual camera can be expressed as:
T = (x₁, y₁, z₁)    (9)
where T denotes the direction vector of the third virtual camera, x₁ denotes the horizontal-axis component of the direction vector of the third virtual camera, y₁ denotes its longitudinal-axis component, and z₁ denotes its vertical-axis component.
In some embodiments, the attitude angle includes a pitch angle and a heading angle; step 2013 above can be implemented as follows: the arcsine of the vertical-axis component of the direction vector of the third virtual camera is determined as the pitch angle of the third virtual camera, the vertical-axis component being the component of the direction vector of the third virtual camera on the vertical axis of the world coordinate system; the ratio of the longitudinal-axis component to the horizontal-axis component of the direction vector of the third virtual camera is determined as a reference ratio, the longitudinal-axis component being the component of the direction vector on the longitudinal axis of the world coordinate system and the horizontal-axis component being the component on the horizontal axis of the world coordinate system; and the arctangent of the reference ratio is determined as the heading angle of the third virtual camera.
In some embodiments, the attitude angle of the third virtual camera includes a roll angle, and the roll angle may be 0.
In some embodiments, the pitch angle of the third virtual camera can be expressed as:
Q₂ = b₃·arcsin(z₁)    (10)
where b₃ denotes a pitch-angle coefficient, z₁ denotes the vertical-axis component of the direction vector of the third virtual camera, and Q₂ denotes the pitch angle of the third virtual camera.
In some embodiments, the heading angle of the third virtual camera can be expressed as:
W₂ = b₄·atan2(y₁, x₁)    (11)
where W₂ denotes the heading angle of the third virtual camera, b₄ denotes a heading-angle coefficient, x₁ denotes the horizontal-axis component of the direction vector of the third virtual camera, and y₁ denotes its longitudinal-axis component.
In this way, the pitch angle and heading angle of the third virtual camera are determined accurately based on the direction vector of the third virtual camera, which facilitates subsequent smoothing based on the accurate pitch and heading angles and effectively improves the accuracy of the pitch and heading angles.
In some embodiments, when the camera parameters include a field-of-view angle, referring to FIG. 11, which is a schematic flowchart of the virtual camera parameter processing method provided by the embodiments of the present application, step 201 shown in FIG. 11 can be implemented by performing the following steps 2014 to 2015.
In step 2014, when a virtual object exists within the field of view of the third virtual camera, a virtual distance is acquired. The virtual distance is the distance between a first position and a second position, where the first position is the position of the third virtual camera in the world coordinate system and the second position is the position of the virtual object in the world coordinate system.
In some embodiments, when a virtual object exists within the field of view of the third virtual camera, the virtual distance between the position of the third virtual camera in the world coordinate system and the position of the virtual object in the world coordinate system is acquired.
In step 2015, the field-of-view angle of the third virtual camera is determined based on the virtual distance, where the magnitude of the virtual distance is directly proportional to the magnitude of the field-of-view angle.
In some embodiments, the field-of-view angle of the third virtual camera is determined from the magnitude of the acquired virtual distance: when the virtual distance decreases, the field-of-view angle of the third virtual camera decreases accordingly; when the virtual distance increases, the field-of-view angle of the third virtual camera increases accordingly.
In this way, the field-of-view angle of the third virtual camera is dynamically controlled by the magnitude of the virtual distance, so that the field-of-view angle changes correspondingly as the virtual distance changes. This achieves dynamic regulation of the field-of-view angle of the third virtual camera and automatic dolly, zoom, pan and tilt movements, effectively improving the camera-movement effect of the third virtual camera.
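For illustration, mapping the virtual distance to a field-of-view angle (steps 2014 to 2015) can be sketched in Python as follows; the linear factor and the clamping range are assumptions, since the embodiments only state that the field-of-view angle is directly proportional to the virtual distance.
    def fov_from_distance(distance, k_fov=6.0, fov_min=10.0, fov_max=90.0):
        # FOV (degrees) grows linearly with the virtual distance, clamped to a usable range
        return max(fov_min, min(fov_max, k_fov * distance))
With these assumed values, a virtual object 5 units away yields a 30-degree field of view, while moving it to 20 units widens the view to the 90-degree cap.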
In step 202, the camera parameters of the third virtual camera are smoothed to obtain the target camera parameters of the third virtual camera.
In some embodiments, the camera parameters of the third virtual camera include at least one of an attitude angle, a field-of-view angle and a camera position, and the attitude angle includes a pitch angle, a heading angle and a roll angle. Step 202 above can be implemented by performing at least one of the following: smoothing the attitude angle of the third virtual camera to obtain a target attitude-angle parameter; smoothing the field-of-view angle of the third virtual camera to obtain a target field-of-view-angle parameter; smoothing the camera position of the third virtual camera to obtain a target camera-position parameter.
In some embodiments, smoothing the attitude angle of the third virtual camera to obtain the target attitude-angle parameter can be implemented as follows: smoothing the pitch angle of the third virtual camera to obtain a target pitch-angle parameter; smoothing the heading angle of the third virtual camera to obtain a target heading-angle parameter; and smoothing the roll angle of the third virtual camera to obtain a target roll-angle parameter.
In some embodiments, smoothing refers to reducing the gap between two to-be-smoothed parameters at adjacent smoothing instants so as to achieve parameter smoothing.
In some embodiments, the third virtual camera has n smoothing instants, where n is a positive integer greater than 1. Step 202 above can be implemented as follows: when the camera parameters of the third virtual camera include the camera parameter at the n-th smoothing instant, a smoothing index and the (n-1)-th target camera parameter are acquired; based on the smoothing index and the (n-1)-th target camera parameter, the camera parameter at the n-th smoothing instant is smoothed to obtain the n-th target camera parameter, which is taken as the target camera parameter.
In some embodiments, the camera parameter at the n-th smoothing instant is the camera parameter of the third virtual camera before smoothing at the n-th smoothing instant.
In some embodiments, the smoothing index indicates the degree to which the camera parameter is smoothed. The (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the third virtual camera at the (n-1)-th smoothing instant.
As an example, when n = 2, the smoothing index and the 1st target camera parameter are acquired, where the 1st target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the third virtual camera at the 1st smoothing instant.
As an example, when n = 3, the smoothing index and the 2nd target camera parameter are acquired, where the 2nd target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the third virtual camera at the 2nd smoothing instant.
In some embodiments, the smoothing index is between 0 and 1 and indicates the degree to which the camera parameter is smoothed: the higher the smoothing index, the higher the degree of smoothing of the corresponding camera parameter. The specific setting of the smoothing index can be made according to different application scenarios.
In some embodiments, the smoothing index is between 0 and 1, and smoothing the camera parameter at the n-th smoothing instant based on the smoothing index and the (n-1)-th target camera parameter to obtain the n-th target camera parameter, which is taken as the target camera parameter, can be implemented as follows: the product of the camera parameter at the n-th smoothing instant and the smoothing index is determined as a third reference parameter; the product of the (n-1)-th target camera parameter and a complementary smoothing index is determined as a fourth reference parameter, where the complementary smoothing index is the difference between 1 and the smoothing index; and the third reference parameter and the fourth reference parameter are added to obtain the n-th target camera parameter, which is taken as the target camera parameter.
In some embodiments, the third reference parameter can be expressed as:
T₃ = k₃βₙ    (12)
where T₃ denotes the third reference parameter, k₃ denotes the smoothing index, and βₙ denotes the camera parameter at the n-th smoothing instant.
In some embodiments, the fourth reference parameter can be expressed as:
T₄ = (1 − k₃)αₙ₋₁    (13)
where T₄ denotes the fourth reference parameter, k₃ denotes the smoothing index, αₙ₋₁ denotes the (n-1)-th target camera parameter, and 1 − k₃ denotes the complementary smoothing index.
In some embodiments, the n-th target camera parameter can be expressed as:
αₙ = T₃ + T₄ = k₃βₙ + (1 − k₃)αₙ₋₁    (14)
where αₙ denotes the n-th target camera parameter, T₃ denotes the third reference parameter, and T₄ denotes the fourth reference parameter.
In this way, by smoothing each of the n smoothing instants of the third virtual camera, the camera parameter at every smoothing instant of the third virtual camera is smoothed, so that the gap between the camera parameters of any two adjacent smoothing instants does not change abruptly, thereby achieving smoothing of the camera parameters.
In step 203, a fourth virtual camera is configured in the virtual scene, the focus of the fourth virtual camera corresponding to the focus of the third virtual camera.
In some embodiments, the focus of the fourth virtual camera corresponding to the focus of the third virtual camera may mean that the distance between the focus of the fourth virtual camera and the focus of the third virtual camera is less than a focus distance threshold. That is, the distance between the focus of the fourth virtual camera and the focus of the third virtual camera may be 0; when that distance is 0, the two focus positions coincide.
In some embodiments, the fourth virtual camera needs to maintain the same perspective relationship as the third virtual camera. By setting the focus of the fourth virtual camera such that its distance from the focus of the third virtual camera is less than the focus distance threshold, i.e., the focus position of the third virtual camera and that of the fourth virtual camera are close to each other, the fourth virtual camera is guaranteed to maintain the same perspective relationship as the third virtual camera.
In this way, by configuring in the virtual scene a fourth virtual camera whose focus is within the focus distance threshold of the focus of the third virtual camera, the fourth virtual camera always follows the shooting direction of the third virtual camera. A fourth virtual camera with the same shooting function and the same perspective relationship as the third virtual camera is thus configured in the virtual scene, replacing the third virtual camera to render, based on the image data, an image of the virtual scene including the object.
In step 204, the camera parameters of the fourth virtual camera are adjusted based on the target camera parameters of the third virtual camera to obtain an adjusted fourth virtual camera.
In some embodiments, the adjusted fourth virtual camera is used to render an image of the virtual scene.
In some embodiments, the target camera parameters of the third virtual camera include at least one of a target attitude-angle parameter, a target field-of-view-angle parameter and a target position parameter of the third virtual camera. Step 204 above can be implemented as follows: based on at least one of the target attitude-angle parameter, the target field-of-view-angle parameter and the target position parameter of the third virtual camera, the camera parameters of the fourth virtual camera are adjusted to obtain the adjusted fourth virtual camera.
In some embodiments, step 204 above can be implemented as follows: the current camera parameters of the fourth virtual camera are adjusted to the target camera parameters to obtain the adjusted fourth virtual camera.
In some embodiments, adjusting the current camera parameters of the fourth virtual camera to the target camera parameters to obtain the adjusted fourth virtual camera can be implemented as follows: based on the n-th target camera parameter, the camera parameter of the fourth virtual camera at the (n-1)-th smoothing instant is adjusted to the n-th target camera parameter to obtain the adjusted fourth virtual camera.
In some embodiments, after the adjustment to the target camera parameters, the following processing can also be performed: in response to an adjustment instruction for the target camera parameters, the target camera parameters are adjusted to obtain the adjusted fourth virtual camera.
In some embodiments, the image rendered by the fourth virtual camera can also be given a shake effect as follows: in response to a shake-parameter adding instruction for the adjusted fourth virtual camera, a shake parameter is added to the camera parameters of the adjusted fourth virtual camera, so that the image rendered by the fourth virtual camera with the shake parameter added produces a shaking effect.
In this way, by adding a shake parameter to the camera parameters of the adjusted fourth virtual camera, the anti-shake processing is applied in reverse: the image rendered by the fourth virtual camera can be made to shake, imitating an earthquake effect in a real scene, so that the picture rendered by the adjusted fourth virtual camera appears more realistic.
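For illustration, adding attitude-angle shake to the adjusted camera can be sketched in Python as follows; the uniform noise model and the amplitude value are assumptions, since the embodiments only state that a shake parameter is added to the camera parameters of the adjusted camera.
    import random

    def add_shake(attitude, amplitude=0.5):
        # attitude: dict of angles in degrees; amplitude: maximum jitter per angle (degrees)
        return {name: angle + random.uniform(-amplitude, amplitude)
                for name, angle in attitude.items()}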
In this way, the camera parameters of the first virtual camera bound to the physical camera are acquired and smoothed to obtain the target camera parameters; based on the target camera parameters, the camera parameters of the second virtual camera are adjusted to obtain the adjusted second virtual camera; and through the adjusted second virtual camera, the image data collected by the physical camera is rendered into an image of the virtual scene including the object. During virtual-real fusion rendering, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera has the same camera parameters as the physical camera, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing those of the physical camera. Moreover, by configuring a second virtual camera whose focus corresponds to that of the first virtual camera, the target camera parameters obtained by smoothing are transferred to the second virtual camera. As a result, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters at the second virtual camera remain stable, which effectively improves the stability of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs.
Below, an exemplary application of the embodiments of the present application in an actual virtual-real fusion application scenario will be described.
In a virtual-real fusion application scenario, virtual objects and the real world can be displayed in the same line of sight. The virtual camera parameter processing method provided by the embodiments of the present application can effectively improve the visual quality of the virtual-real fusion picture. For example, in live-streaming and video-production scenarios, the physical camera can be controlled freely for shooting while the picture remains stable, improving the viewing experience; when following a physical object (i.e., the object in the real scene described above), a camera movement far smoother than manual operation can be achieved, improving picture quality. In application scenarios such as education, online meetings and interactive games, high-quality camera movement can be achieved; for example, a physical object (a person) can stand under a starry sky (in the virtual scene).
In the embodiments of the present application, the camera parameters of the original virtual-real fusion camera in the virtual scene (i.e., the first virtual camera described above) are smoothed to obtain smoothed camera parameters, which are then configured to the smoothing virtual camera (i.e., the second virtual camera described above). This effectively prevents the shaking of the original virtual-real fusion camera from degrading the rendering effect, thereby effectively improving the rendering effect.
In a virtual-real fusion application scenario, the embodiments of the present application can configure a smoothing virtual camera in the virtual scene whose focus coordinates coincide with those of the original virtual-real fusion camera in the virtual scene. In this way, the direction of the smoothing virtual camera can be kept consistent with that of the original virtual-real fusion camera, thereby realizing functions such as automatic follow-shooting and anti-shake.
It should be noted that if there is only a slight difference between the foci of the smoothing virtual camera and the original virtual-real fusion camera, the perspective relationship is still essentially correct. That is, the focus distance between the smoothing virtual camera and the original virtual-real fusion camera is less than a distance threshold, and the size of the distance threshold can be set according to the application scenario.
In some embodiments, referring to FIG. 12, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application: the original virtual camera in the virtual scene may be a virtual camera selected by the user, and it may be a virtual camera bound to a physical camera or one not bound to any physical camera. The camera parameters of the original virtual camera include an attitude angle, a field-of-view angle and three-dimensional coordinates. Smoothing the camera parameters of the original virtual camera to obtain smoothed camera parameters can be implemented as follows: the attitude angle, the field-of-view angle and the three-dimensional coordinates are each smoothed to obtain a smoothed attitude angle, a smoothed field-of-view angle and smoothed three-dimensional coordinates. Below, the smoothing of the attitude angle, the field-of-view angle and the three-dimensional coordinates is described in turn with reference to FIG. 12.
First, the smoothing of the attitude angle is described with reference to FIG. 12 and FIG. 9, FIG. 12 being a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application. In response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed (i.e., the first virtual camera described above) is determined; the attitude-angle smoothing module is invoked to smooth the camera parameters of the virtual camera to be smoothed, obtaining the smoothed attitude angle.
In some embodiments, the original virtual camera in the virtual scene may be a virtual camera selected by the user; it may be a virtual camera bound to a physical camera, or one not bound to any physical camera.
As an example, referring to FIG. 5, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application: the original virtual cameras in the virtual scene include virtual camera 2, virtual camera 3 and virtual camera 4, where virtual camera 2 may be the virtual camera bound to physical camera 1, physical camera 1 being a camera that exists in the real world; virtual camera 3 may be a virtual camera looking toward virtual camera 2; virtual camera 4 may be any virtual camera selected by a selection operation; virtual camera 3 and virtual camera 4 have no binding relationship with physical camera 1.
Referring to FIG. 8, in response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed is determined. When the virtual camera to be smoothed is the virtual camera bound to the physical camera (i.e., the first virtual camera described above), the coordinates of each three-dimensional skeleton point 46 in the world coordinate system are determined based on the virtual fusion object module 42; the spatial position 44 of the virtual camera bound to the physical camera is determined based on the virtual-real fusion camera module 43; the attitude angle 47 of the virtual camera bound to the physical camera is determined based on the coordinates of each three-dimensional skeleton point and the position of the virtual camera bound to the physical camera; and the attitude-angle smoothing module is invoked to smooth attitude angle 47, obtaining the smoothed attitude angle.
In some embodiments, the coordinates of the three-dimensional skeleton points in the world coordinate system (i.e., the skeleton-point coordinates described above) can be weighted-averaged to obtain a weighted average, and the weighted average is taken as the three-dimensional coordinates of the object in the virtual world, as sketched below.
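For illustration, the weighted averaging of skeleton-point coordinates can be sketched in Python as follows; equal weights are assumed when none are given, since the embodiments do not specify the weighting scheme.
    def weighted_target_position(points, weights=None):
        # points: list of (x, y, z) skeleton-point coordinates in the world coordinate system
        if weights is None:
            weights = [1.0 / len(points)] * len(points)
        return tuple(sum(w * p[i] for p, w in zip(points, weights))
                     for i in range(3))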
In some embodiments, determining the attitude angle of the virtual camera bound to the physical camera based on the coordinates of each three-dimensional skeleton point and the position of the virtual camera bound to the physical camera can be implemented as follows: a target vector is determined with the position of the virtual camera bound to the physical camera as the starting point and the three-dimensional coordinates of the object in the virtual world as the end point; and the attitude angle of the virtual camera bound to the physical camera is determined based on the target vector, where the target vector can be expressed as:
v = (x, y, z)    (15)
where v denotes the target vector, x denotes the component of the target vector along the horizontal axis of the world coordinate system, y denotes its component along the longitudinal axis, and z denotes its component along the vertical axis.
Here, the attitude angle of the virtual camera bound to the physical camera includes a roll angle, a pitch angle and a heading angle, expressed respectively as:
G₁ = 0    (16)
G₂ = arcsin(z)    (17)
G₃ = atan2(y, x)    (18)
where G₁ denotes the roll angle, G₂ denotes the pitch angle, and G₃ denotes the heading angle.
In some embodiments, in the attitude-angle smoothing module, the attitude angle can be smoothed as follows: a filter is invoked to smooth the attitude angle, obtaining the smoothed attitude angle.
Here, the smoothing can be expressed as:
αₙ = (1 − k₁)αₙ₋₁ + k₁βₙ    (19)
where αₙ denotes the smoothed attitude angle at instant n, αₙ₋₁ denotes the smoothed attitude angle at instant n−1, βₙ denotes the attitude angle before smoothing at instant n, and k₁ denotes the attitude-angle smoothing index, k₁ ∈ [0, 1], which characterizes the degree to which the attitude angle is smoothed.
Referring to FIG. 8, in response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed is determined. When the virtual camera to be smoothed is a virtual camera 49 whose attitude angle is set through mouse or touch operations 48 (i.e., the third virtual camera described above), the set attitude angle is acquired, and the attitude-angle smoothing module is invoked to smooth the acquired attitude angle, obtaining the smoothed attitude angle.
Referring to FIG. 8, in response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed is determined. When the virtual camera to be smoothed is a virtual camera looking toward the virtual camera bound to the physical camera, the attitude angle 45 of the virtual camera looking toward the virtual camera bound to the physical camera is acquired, and the attitude-angle smoothing module is invoked to smooth attitude angle 45, obtaining the smoothed attitude angle.
Referring to FIG. 8, the processing performed by the attitude-angle smoothing module is described below. It is determined whether the attitude angle is of the quaternion type. When the attitude angle is of the quaternion type, the quaternion-type attitude angle is smoothed element by element to obtain a smoothed quaternion-type attitude angle, which is then converted to the Euler-angle type. When the attitude angle is of the Euler-angle type, the Euler-angle-type attitude angle is smoothed element by element to obtain a smoothed Euler-angle-type attitude angle.
Next, the smoothing of the field-of-view angle is described. Referring to FIG. 13, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application: in response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed is determined. When the virtual camera to be smoothed is the virtual camera bound to the physical camera, the field-of-view angle of the virtual camera bound to the physical camera is determined based on the virtual-real fusion real-time calibration module 50; when the virtual camera to be smoothed is not the virtual camera bound to the physical camera, the field-of-view angle set through mouse or touch operations is acquired. The real-time field-of-view smoothing module is then invoked to smooth the field-of-view angle, obtaining the smoothed field-of-view angle.
Referring to FIG. 13, the processing performed by the real-time field-of-view smoothing module is described below. In the real-time field-of-view smoothing module, the field-of-view angle can be smoothed as follows: a temporal filter is invoked to smooth the input field-of-view angle, yielding the output of the real-time field-of-view smoothing module.
Here, the smoothing can be expressed as:
θₙ = (1 − k₂)θₙ₋₁ + k₂εₙ    (20)
where θₙ denotes the smoothed field-of-view angle at instant n, θₙ₋₁ denotes the smoothed field-of-view angle at instant n−1, εₙ denotes the field-of-view angle before smoothing at instant n, and k₂ denotes the field-of-view smoothing index, k₂ ∈ [0, 1], which characterizes the degree to which the field-of-view angle is smoothed.
Finally, the smoothing of the three-dimensional coordinates is described. Referring to FIG. 14, which is a schematic diagram of the principle of the virtual camera parameter processing method provided by the embodiments of the present application: in response to a real-time switching operation on the original virtual camera, the virtual camera to be smoothed is determined. When the virtual camera to be smoothed is the virtual camera bound to the physical camera, the three-dimensional coordinates of the virtual camera bound to the physical camera are determined based on the virtual-real fusion real-time calibration module 51; the three-dimensional coordinates are smoothed element by element to obtain the smoothed camera three-dimensional coordinates.
In some embodiments, the element-by-element smoothing can be expressed as:
τₙ = (1 − k₃)τₙ₋₁ + k₃ωₙ    (21)
where τₙ denotes the smoothed camera three-dimensional coordinates at instant n, τₙ₋₁ denotes the smoothed camera three-dimensional coordinates at instant n−1, ωₙ denotes the camera three-dimensional coordinates before smoothing at instant n, and k₃ denotes the three-dimensional-coordinate smoothing index, k₃ ∈ [0, 1], which characterizes the degree to which the three-dimensional coordinates are smoothed.
In some embodiments, attitude-angle shake can be added to the camera of the virtual stabilizer in post-processing (using the anti-shake characteristic in reverse) to simulate the effects of camera shake caused by an earthquake or by handheld shooting, thereby enhancing the texture of the camera movement.
In some embodiments, the smoothing method is not limited to the above first-order infinite impulse response (IIR) filter; filters of other orders, or filters such as the Kalman filter, can also implement the smoothing.
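For illustration, formula (19) can equivalently be run as a first-order IIR filter; the following Python sketch assumes SciPy is available, and lfilter's zero initial state differs slightly from seeding the recursion with the first raw sample.
    import numpy as np
    from scipy.signal import lfilter

    def smooth_iir(raw, k):
        # y[n] = k * x[n] + (1 - k) * y[n-1], i.e., formula (19)
        b = [k]                 # feed-forward coefficient (weight on the raw parameter)
        a = [1.0, -(1.0 - k)]   # feedback coefficient (weight on the previous output)
        return lfilter(b, a, np.asarray(raw, dtype=float))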
In some embodiments, for a virtual camera not bound to a physical camera, the field-of-view angle can be set automatically according to the size of the virtual object's projection on the imaging plane of the virtual camera, thereby realizing automatic dolly, zoom, pan and tilt movements of the virtual camera.
It can be understood that in the embodiments of the present application, where data related to camera parameters and the like is involved, user permission or consent needs to be obtained when the embodiments of the present application are applied to specific products or technologies, and the collection, use and processing of the relevant data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
The following continues to describe an exemplary structure of the virtual camera parameter processing apparatus 455 provided by the embodiments of the present application implemented as software modules. In some embodiments, as shown in FIG. 3, the software modules in the virtual camera parameter processing apparatus 455 stored in the memory 450 may include: an acquisition module 4551, configured to acquire the camera parameters of a first virtual camera, the first virtual camera having a binding relationship with a physical camera in the real scene, the physical camera being used to collect image data of an object in the real scene to obtain the image data of the object; a smoothing module 4552, configured to smooth the camera parameters of the first virtual camera to obtain target camera parameters; and an adjustment module 4553, configured to adjust, based on the target camera parameters, the camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera, the focus of the second virtual camera corresponding to the focus of the first virtual camera; where the adjusted second virtual camera is configured to render, based on the image data, an image of the virtual scene including the object.
In some embodiments, the above apparatus further includes a configuration module configured to configure the second virtual camera in the virtual scene.
In some embodiments, when the camera parameters include an attitude angle, the acquisition module 4551 is further configured to: acquire the target position of the object in the world coordinate system and the position of the first virtual camera; determine a target direction vector based on the target position of the object in the world coordinate system and the position of the first virtual camera, the target direction vector being used to indicate, in the world coordinate system, the direction in which the first virtual camera points toward the object; and determine the attitude angle of the first virtual camera based on the target direction vector.
In some embodiments, the attitude angle includes a pitch angle and a heading angle; the acquisition module 4551 is further configured to: determine the arcsine of the vertical-axis component of the target direction vector as the pitch angle of the first virtual camera, the vertical-axis component being the component of the target direction vector on the vertical axis of the world coordinate system; determine the ratio of the longitudinal-axis component to the horizontal-axis component of the target direction vector as a reference ratio, the longitudinal-axis component being the component of the target direction vector on the longitudinal axis of the world coordinate system, and the horizontal-axis component being the component of the target direction vector on the horizontal axis of the world coordinate system; and determine the arctangent of the reference ratio as the heading angle of the first virtual camera.
In some embodiments, when there is one object, the acquisition module 4551 is further configured to: acquire multiple skeleton-point coordinates of the object in the world coordinate system; and perform a weighted summation over the multiple skeleton-point coordinates to obtain the target position of the object in the world coordinate system.
In some embodiments, when there are at least two objects, the acquisition module 4551 is further configured to: perform the following for each object: acquiring multiple skeleton-point coordinates of the object in the world coordinate system, and performing a weighted summation over the multiple skeleton-point coordinates to obtain the position of the object in the world coordinate system; and determine the target position based on the positions of the objects in the world coordinate system, where the distance between the target position and the position of each object in the world coordinate system is less than a distance threshold.
In some embodiments, the first virtual camera has n smoothing instants, n being a positive integer greater than 1; the smoothing module 4552 is further configured to: when the camera parameters of the first virtual camera include the camera parameter at the n-th smoothing instant, acquire a smoothing index and the (n-1)-th target camera parameter, where the smoothing index indicates the degree to which the camera parameter is smoothed, and the (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the (n-1)-th smoothing instant; and smooth the camera parameter at the n-th smoothing instant based on the smoothing index and the (n-1)-th target camera parameter to obtain the n-th target camera parameter, which is taken as the target camera parameter.
In some embodiments, the smoothing index is between 0 and 1, and the smoothing module 4552 is further configured to: determine the product of the camera parameter at the n-th smoothing instant and the smoothing index as a first reference parameter; determine the product of the (n-1)-th target camera parameter and a complementary smoothing index as a second reference parameter, the complementary smoothing index being the difference between 1 and the smoothing index; and add the first reference parameter and the second reference parameter to obtain the n-th target camera parameter, which is taken as the target camera parameter.
In some embodiments, when the camera parameters include an attitude angle, the attitude angle includes a pitch angle, a heading angle and a roll angle; the above virtual camera parameter processing apparatus further includes a locking module configured to lock a target angle in response to a locking instruction for the target angle among the attitude angles, the target angle including at least one of the pitch angle, the heading angle and the roll angle; the smoothing module is further configured to smooth the part of the attitude angle other than the target angle to obtain the target camera parameters.
In some embodiments, the adjustment module 4553 is further configured to adjust the current camera parameters of the second virtual camera to the target camera parameters to obtain the adjusted second virtual camera; the above virtual camera parameter processing apparatus further includes an instruction adjustment module configured to adjust the target camera parameters in response to an adjustment instruction for the target camera parameters, so as to obtain the adjusted second virtual camera.
In some embodiments, when the camera parameters include an attitude angle, the smoothing module 4552 is further configured to: acquire the data type of the attitude angle, the data type including a quaternion type and an Euler-angle type; when the data type is the quaternion type, smooth each element of the quaternion-type attitude angle to obtain a quaternion-type reference attitude angle; and perform data-type conversion on the quaternion-type reference attitude angle to obtain an Euler-angle-type reference attitude angle, which is determined as the target attitude angle.
In some embodiments, the above virtual camera parameter processing apparatus further includes a selection module configured to: determine multiple configured virtual cameras in the virtual scene, each configured virtual camera being bound to a different physical camera; and in response to a selection operation on the multiple configured virtual cameras, determine the selected virtual camera as the first virtual camera.
In some embodiments, the above virtual camera parameter processing apparatus further includes: a second acquisition module, configured to acquire the camera parameters of a third virtual camera, the third virtual camera having no binding relationship with the physical camera in the real scene; a second smoothing module, configured to smooth the camera parameters of the third virtual camera to obtain the target camera parameters of the third virtual camera; a second configuration module, configured to configure a fourth virtual camera in the virtual scene, the focus of the fourth virtual camera corresponding to the focus of the third virtual camera; and a second adjustment module, configured to adjust the camera parameters of the fourth virtual camera based on the target camera parameters of the third virtual camera to obtain an adjusted fourth virtual camera; where the adjusted fourth virtual camera is configured to render an image of the virtual scene.
In some embodiments, when the camera parameters of the third virtual camera include an attitude angle, the second acquisition module is further configured to: acquire the position parameter of the focus position of the third virtual camera; determine the direction vector of the third virtual camera based on the position parameter of the focus position and the position parameter of the third virtual camera; and determine the attitude angle of the third virtual camera based on the direction vector of the third virtual camera.
In some embodiments, when the camera parameters include a field-of-view angle, the second acquisition module is further configured to: when a virtual object exists within the field of view of the third virtual camera, acquire a virtual distance, the virtual distance being the distance between a first position and a second position, the first position being the position of the third virtual camera in the world coordinate system and the second position being the position of the virtual object in the world coordinate system; and determine the field-of-view angle of the third virtual camera based on the virtual distance, where the magnitude of the virtual distance is directly proportional to the magnitude of the field-of-view angle.
In some embodiments, the above virtual camera parameter processing apparatus further includes a shake module configured to: in response to a shake-parameter adding instruction for the adjusted fourth virtual camera, add a shake parameter to the camera parameters of the adjusted fourth virtual camera, so that the image rendered by the fourth virtual camera with the shake parameter added produces a shaking effect.
The embodiments of the present application provide a computer program product, which includes a computer program or computer-executable instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer-executable instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the virtual camera parameter processing method described above in the embodiments of the present application.
The embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the virtual camera parameter processing method provided by the embodiments of the present application, for example, the virtual camera parameter processing method shown in FIG. 4.
In some embodiments, the computer-readable storage medium may be a memory such as a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc or a CD-ROM; it may also be any of various electronic devices including one of or any combination of the above memories.
In some embodiments, the computer-executable instructions may take the form of a program, software, a software module, a script or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a standalone program or as a module, component, subroutine or other unit suitable for use in a computing environment.
As an example, computer-executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple collaborative files (e.g., files storing one or more modules, subroutines or code portions).
As an example, computer-executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
In summary, the embodiments of the present application have the following beneficial effects:
(1) The camera parameters of the first virtual camera bound to the physical camera are acquired and smoothed to obtain the target camera parameters; based on the target camera parameters, the camera parameters of the second virtual camera are adjusted to obtain the adjusted second virtual camera; and through the adjusted second virtual camera, the image data collected by the physical camera is rendered into an image of the virtual scene including the object. During virtual-real fusion rendering, since the first virtual camera has a binding relationship with the physical camera in the real scene, the first virtual camera has the same camera parameters as the physical camera, so smoothing the camera parameters of the first virtual camera is equivalent to smoothing those of the physical camera. Moreover, by configuring a second virtual camera whose focus corresponds to that of the first virtual camera, the target camera parameters obtained by smoothing are transferred to the second virtual camera. As a result, the physical camera in the real scene needs no assistance from a hardware stabilizer: even if the physical camera shakes, the camera parameters at the second virtual camera remain stable, which effectively improves the stability of the virtual camera and saves the hardware cost of installing a hardware stabilizer on the physical camera, thereby significantly reducing hardware costs.
(2) By configuring in the virtual scene a virtual camera bound to each real camera, treating the bound virtual cameras as configured virtual cameras, and, in response to a selection operation on the multiple configured virtual cameras, determining the selected virtual camera as the first virtual camera, the camera parameters of the virtual camera to be smoothed are determined by selection. When many physical cameras are configured, the number of bound virtual cameras also increases sharply; by determining the smoothing target through a selection operation on the configured virtual cameras, it is not necessary to smooth the camera parameters of every virtual camera. Instead, virtual cameras are selected and only the camera parameters of the selected virtual cameras are smoothed, which effectively reduces the number of virtual cameras whose camera parameters need smoothing, effectively reduces the amount of computation and improves smoothing efficiency.
(3) By performing a weighted summation over the multiple skeleton-point coordinates, the target position of the object in the world coordinate system is obtained, so that the target position is determined accurately, which facilitates subsequent determination of the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle of the first virtual camera.
(4) When there are at least two objects, the position of each object in the world coordinate system can be determined separately, and a position whose distance from each object's position in the world coordinate system is less than a distance threshold is determined as the target position, so that the target position is determined accurately, which facilitates subsequent determination of the attitude angle of the first virtual camera based on the target position and effectively improves the accuracy of the determined attitude angle of the first virtual camera.
(5) The pitch angle and heading angle of the first virtual camera are determined accurately based on the target direction vector, which facilitates subsequent smoothing based on the accurate pitch and heading angles and effectively improves the accuracy of the pitch and heading angles.
(6) By smoothing each of the n smoothing instants of the first virtual camera, the camera parameter at every smoothing instant of the first virtual camera is smoothed, so that the gap between the camera parameters of any two adjacent smoothing instants does not change abruptly, thereby achieving smoothing of the camera parameters.
(7) By smoothing only the part of the attitude angle other than the target angle, different attitude angles can be smoothed step by step or partially, which ensures the controllability of attitude-angle smoothing and meets the smoothing needs of various application scenarios; at the same time, progressive smoothing can be achieved, effectively reducing the error rate during smoothing and improving smoothing accuracy.
(8) By converting and smoothing attitude angles of different types according to their data type, the universality of attitude-angle smoothing is effectively improved.
(9) By configuring in the virtual scene a second virtual camera whose focus is within the focus distance threshold of the focus of the first virtual camera, the second virtual camera always follows the shooting direction of the first virtual camera, so that a second virtual camera with the same shooting function and the same perspective relationship as the first virtual camera is configured in the virtual scene, replacing the first virtual camera to render, based on the image data, an image of the virtual scene including the object.
(10) The pitch angle and heading angle of the third virtual camera are determined accurately based on the direction vector of the third virtual camera, which facilitates subsequent smoothing based on the accurate pitch and heading angles and effectively improves the accuracy of the pitch and heading angles.
(11) The field-of-view angle of the third virtual camera is dynamically controlled by the magnitude of the virtual distance, so that the field-of-view angle changes correspondingly as the virtual distance changes; this achieves dynamic regulation of the field-of-view angle of the third virtual camera and automatic dolly, zoom, pan and tilt movements, effectively improving the camera-movement effect of the third virtual camera.
(12) By adding a shake parameter to the camera parameters of the adjusted fourth virtual camera, the anti-shake processing is applied in reverse: the image rendered by the fourth virtual camera can be made to shake, imitating an earthquake effect in a real scene, so that the picture rendered by the adjusted fourth virtual camera appears more realistic.
(13) By configuring in the virtual scene a fourth virtual camera whose focus is within the focus distance threshold of the focus of the third virtual camera, the fourth virtual camera always follows the shooting direction of the third virtual camera, so that a fourth virtual camera with the same shooting function and the same perspective relationship as the third virtual camera is configured in the virtual scene, replacing the third virtual camera to render, based on the image data, an image of the virtual scene including the object.
(14) By adding attitude-angle shake to the camera of the virtual stabilizer in post-processing (using the anti-shake characteristic in reverse), the effects of camera shake caused by an earthquake or by handheld shooting are simulated, enhancing the texture of the camera movement.
The above is merely a description of embodiments of the present application and is not intended to limit the protection scope of the present application. Any modifications, equivalent replacements and improvements made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (19)

  1. A virtual camera parameter processing method, the method being executed by an electronic device, the method comprising:
    acquiring camera parameters of a first virtual camera in a virtual scene, the first virtual camera having a binding relationship with a physical camera in a real scene, the physical camera being used to collect image data of an object in the real scene to obtain the image data of the object;
    smoothing the camera parameters of the first virtual camera to obtain target camera parameters;
    adjusting, based on the target camera parameters, camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera, a focus of the second virtual camera corresponding to a focus of the first virtual camera;
    wherein the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.
  2. The method according to claim 1, wherein, when the camera parameters comprise an attitude angle, the acquiring camera parameters of a first virtual camera in a virtual scene comprises:
    acquiring a target position of the object in a world coordinate system and a position of the first virtual camera in the virtual scene;
    determining a target direction vector based on the target position of the object in the world coordinate system and the position of the first virtual camera, the target direction vector being used to indicate, in the world coordinate system, a direction in which the first virtual camera points toward the object;
    determining the attitude angle of the first virtual camera based on the target direction vector.
  3. The method according to claim 2, wherein the attitude angle comprises a pitch angle and a heading angle; the determining the attitude angle of the first virtual camera based on the target direction vector comprises:
    determining the arcsine of a vertical-axis component of the target direction vector as the pitch angle of the first virtual camera, the vertical-axis component being the component of the target direction vector on the vertical axis of the world coordinate system;
    determining the ratio of a longitudinal-axis component to a horizontal-axis component of the target direction vector as a reference ratio, the longitudinal-axis component being the component of the target direction vector on the longitudinal axis of the world coordinate system, and the horizontal-axis component being the component of the target direction vector on the horizontal axis of the world coordinate system;
    determining the arctangent of the reference ratio as the heading angle of the first virtual camera.
  4. The method according to claim 2, wherein, when the number of objects is one, the acquiring a target position of the object in a world coordinate system comprises:
    acquiring multiple skeleton-point coordinates of the object in the world coordinate system;
    performing a weighted summation over the multiple skeleton-point coordinates to obtain the target position of the object in the world coordinate system.
  5. The method according to claim 2, wherein, when the number of objects is at least two, the acquiring a target position of the object in a world coordinate system comprises:
    performing the following for each object: acquiring multiple skeleton-point coordinates of the object in the world coordinate system; performing a weighted summation over the multiple skeleton-point coordinates to obtain the position of the object in the world coordinate system;
    determining the target position based on the positions of the objects in the world coordinate system, wherein the distance between the target position and the position of each object in the world coordinate system is less than a distance threshold.
  6. The method according to any one of claims 1 to 5, wherein the first virtual camera has n smoothing instants, n being a positive integer greater than 1;
    the smoothing the camera parameters of the first virtual camera to obtain target camera parameters comprises:
    when the camera parameters of the first virtual camera comprise a camera parameter at the n-th smoothing instant, acquiring a smoothing index and an (n-1)-th target camera parameter;
    wherein the smoothing index is used to indicate a degree to which the camera parameter is smoothed;
    the (n-1)-th target camera parameter is the target camera parameter obtained by smoothing the camera parameter of the first virtual camera at the (n-1)-th smoothing instant;
    smoothing the camera parameter at the n-th smoothing instant based on the smoothing index and the (n-1)-th target camera parameter to obtain an n-th target camera parameter, and taking the n-th target camera parameter as the target camera parameter.
  7. The method according to claim 6, wherein the smoothing index is between 0 and 1, and the smoothing the camera parameter at the n-th smoothing instant based on the smoothing index and the (n-1)-th target camera parameter to obtain an n-th target camera parameter, and taking the n-th target camera parameter as the target camera parameter, comprises:
    determining the product of the camera parameter at the n-th smoothing instant and the smoothing index as a first reference parameter;
    determining the product of the (n-1)-th target camera parameter and a complementary smoothing index as a second reference parameter, wherein the complementary smoothing index is the difference between 1 and the smoothing index;
    adding the first reference parameter and the second reference parameter to obtain the n-th target camera parameter, and taking the n-th target camera parameter as the target camera parameter.
  8. The method according to any one of claims 1 to 7, wherein, when the camera parameters comprise an attitude angle, the attitude angle comprises a pitch angle, a heading angle and a roll angle;
    before the smoothing the camera parameters of the first virtual camera to obtain target camera parameters, the method further comprises:
    in response to a locking instruction for a target angle among the attitude angles, locking the target angle, wherein the target angle comprises at least one of the pitch angle, the heading angle and the roll angle;
    the smoothing the camera parameters of the first virtual camera to obtain target camera parameters comprises:
    smoothing the part of the attitude angle other than the target angle to obtain the target camera parameters.
  9. The method according to any one of claims 1 to 8, wherein the adjusting, based on the target camera parameters, camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera comprises:
    adjusting the current camera parameters of the second virtual camera in the virtual scene to the target camera parameters to obtain the adjusted second virtual camera;
    after the adjustment to the target camera parameters, the method further comprises:
    in response to an adjustment instruction for the target camera parameters, adjusting the target camera parameters to obtain the adjusted second virtual camera.
  10. The method according to any one of claims 1 to 9, wherein, when the camera parameters comprise an attitude angle, the smoothing the camera parameters of the first virtual camera to obtain target camera parameters comprises:
    acquiring a data type of the attitude angle, wherein the data type comprises a quaternion type and an Euler-angle type;
    when the data type is the quaternion type, smoothing each element of the quaternion-type attitude angle to obtain a quaternion-type reference attitude angle;
    performing data-type conversion on the quaternion-type reference attitude angle to obtain an Euler-angle-type reference attitude angle, and determining the Euler-angle-type reference attitude angle as a target attitude angle.
  11. The method according to any one of claims 1 to 10, wherein, before the acquiring camera parameters of a first virtual camera in a virtual scene, the method further comprises:
    determining multiple configured virtual cameras in the virtual scene, wherein each configured virtual camera is bound to a different physical camera;
    in response to a selection operation on the multiple configured virtual cameras, determining the selected virtual camera as the first virtual camera.
  12. The method according to any one of claims 1 to 11, wherein the method further comprises:
    acquiring camera parameters of a third virtual camera, the third virtual camera having no binding relationship with the physical camera in the real scene;
    smoothing the camera parameters of the third virtual camera to obtain target camera parameters of the third virtual camera;
    configuring a fourth virtual camera in the virtual scene, a focus of the fourth virtual camera corresponding to a focus of the third virtual camera;
    adjusting the camera parameters of the fourth virtual camera based on the target camera parameters of the third virtual camera to obtain an adjusted fourth virtual camera;
    wherein the adjusted fourth virtual camera is used to render an image of the virtual scene.
  13. The method according to claim 12, wherein, when the camera parameters of the third virtual camera comprise an attitude angle, the acquiring the camera parameters of the third virtual camera comprises:
    acquiring a position parameter of a focus position of the third virtual camera;
    determining a direction vector of the third virtual camera based on the position parameter of the focus position and a position parameter of the third virtual camera;
    determining the attitude angle of the third virtual camera based on the direction vector of the third virtual camera.
  14. The method according to claim 12, wherein, when the camera parameters comprise a field-of-view angle, the acquiring camera parameters of a third virtual camera comprises:
    when a virtual object exists within the field of view of the third virtual camera, acquiring a virtual distance, the virtual distance being the distance between a first position and a second position, the first position being the position of the third virtual camera in a world coordinate system, and the second position being the position of the virtual object in the world coordinate system;
    determining the field-of-view angle of the third virtual camera based on the virtual distance, wherein the magnitude of the virtual distance is directly proportional to the magnitude of the field-of-view angle.
  15. The method according to claim 12, wherein, after the adjusting the camera parameters of the fourth virtual camera to obtain an adjusted fourth virtual camera, the method further comprises:
    in response to a shake-parameter adding instruction for the adjusted fourth virtual camera, adding the shake parameter to the camera parameters of the adjusted fourth virtual camera, so that an image rendered by the fourth virtual camera with the shake parameter added produces a shaking effect.
  16. A virtual camera parameter processing apparatus, the apparatus comprising:
    an acquisition module, configured to acquire camera parameters of a first virtual camera in a virtual scene, the first virtual camera having a binding relationship with a physical camera in a real scene, the physical camera being used to collect image data of an object in the real scene to obtain the image data of the object;
    a smoothing module, configured to smooth the camera parameters of the first virtual camera to obtain target camera parameters;
    an adjustment module, configured to adjust, based on the target camera parameters, camera parameters of a second virtual camera in the virtual scene to obtain an adjusted second virtual camera, a focus of the second virtual camera corresponding to a focus of the first virtual camera;
    wherein the adjusted second virtual camera is used to render, based on the image data, an image of the virtual scene including the object.
  17. An electronic device, the electronic device comprising:
    a memory, configured to store computer-executable instructions or a computer program;
    a processor, configured to implement the virtual camera parameter processing method according to any one of claims 1 to 15 when executing the computer-executable instructions or computer program stored in the memory.
  18. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the virtual camera parameter processing method according to any one of claims 1 to 15.
  19. A computer program product comprising a computer program or computer-executable instructions which, when executed by a processor, implement the virtual camera parameter processing method according to any one of claims 1 to 15.
PCT/CN2023/114226 2022-09-05 2023-08-22 Virtual camera parameter processing method, apparatus, device, storage medium and program product WO2024051487A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211095639.3A 2022-09-05 2022-09-05 Virtual camera parameter processing method and apparatus, electronic device and storage medium
CN202211095639.3 2022-09-05

Publications (1)

Publication Number Publication Date
WO2024051487A1 true WO2024051487A1 (zh) 2024-03-14

Family

ID=90159393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114226 2022-09-05 2023-08-22 Virtual camera parameter processing method, apparatus, device, storage medium and program product

Country Status (2)

Country Link
CN (1) CN117710474A (zh)
WO (1) WO2024051487A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933758A (zh) * 2015-05-20 2015-09-23 北京控制工程研究所 Space camera three-dimensional imaging simulation method based on the OSG three-dimensional engine
CN106485788A (zh) * 2016-10-21 2017-03-08 重庆虚拟实境科技有限公司 Game engine mixed-reality shooting method
CN108989688A (zh) * 2018-09-14 2018-12-11 成都数字天空科技有限公司 Virtual camera anti-shake method and apparatus, electronic device and readable storage medium
US11250617B1 (en) * 2019-09-25 2022-02-15 Amazon Technologies, Inc. Virtual camera controlled by a camera control device
CN114760458A (zh) * 2022-04-28 2022-07-15 中南大学 Method for synchronizing virtual and real camera trajectories in a highly realistic augmented reality studio
WO2022151864A1 (zh) * 2021-01-18 2022-07-21 海信视像科技股份有限公司 Virtual reality device

Also Published As

Publication number Publication date
CN117710474A (zh) 2024-03-15

Similar Documents

Publication Publication Date Title
US11663785B2 (en) Augmented and virtual reality
US11272165B2 (en) Image processing method and device
US9171402B1 (en) View-dependent textures for interactive geographic information system
US8705892B2 (en) Generating three-dimensional virtual tours from two-dimensional images
CN108939556B (zh) 2021-02-26 Screenshot method and apparatus based on a game platform
US10049490B2 (en) Generating virtual shadows for displayable elements
KR20070086037A (ko) 장면 간 전환 방법
CN112711458B (zh) 2022-09-06 Method and apparatus for displaying prop resources in a virtual scene
US9754398B1 (en) Animation curve reduction for mobile application user interface objects
WO2017128887A1 (zh) 全景图像的校正3d显示方法和系统及装置
US9025007B1 (en) Configuring stereo cameras
CN104740874A (zh) 2015-07-01 Method and system for playing video in a two-dimensional game scene
CN114926612A (zh) 2022-08-19 Aerial panoramic image processing and immersive display system
JP2000076488A (ja) 2000-03-14 Three-dimensional virtual space display device and texture object setting information creation device
WO2024051487A1 (zh) 2024-03-14 Virtual camera parameter processing method, apparatus, device, storage medium and program product
CN112827169A (zh) 2021-05-25 Game image processing method and apparatus, storage medium and electronic device
CN116708862A (zh) 2023-09-05 Virtual background generation method for a live-streaming room, computer device and storage medium
CN115439634B (zh) 2024-04-19 Interactive presentation method for point cloud data and storage medium
CN108510433B (zh) 2020-11-24 Space display method, apparatus and terminal
KR20180104915A (ko) 2018-09-27 System for implementing three-dimensional virtual-space animation
US20230360333A1 (en) Systems and methods for augmented reality video generation
CN117197319B (zh) 2024-03-15 Image generation method and apparatus, electronic device and storage medium
CN114219924B (zh) 2022-05-17 Adaptive display method, apparatus, device, medium and program product for a virtual scene
Zhu et al. Integrated Co-Designing Using Building Information Modeling and Mixed Reality with Erased Backgrounds for Stock Renovation
CN116485879A (zh) 2023-07-25 Virtual camera control method, apparatus, device, storage medium and product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862183

Country of ref document: EP

Kind code of ref document: A1