CN117528237A - Adjustment method and device for virtual camera - Google Patents

Adjustment method and device for virtual camera

Info

Publication number
CN117528237A
Authority
CN
China
Prior art keywords
camera
entity
virtual
focus
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311450722.2A
Other languages
Chinese (zh)
Inventor
Zhang Huan
Chen Shiping
Li Xiaoyang
Liu Jie
Zhang Zhongjie
Yang Zhigang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Original Assignee
Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenli Vision Shenzhen Cultural Technology Co., Ltd.
Priority to CN202311450722.2A
Publication of CN117528237A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

One or more embodiments of the present disclosure provide a method and apparatus for adjusting a virtual camera, applied to a data processing program running on a computing device. A virtual three-dimensional space corresponding to a physical three-dimensional space is simulated on the computing device, and a virtual camera corresponding to a physical camera is created in the virtual three-dimensional space. The virtual camera is used to perform simulated shooting of a virtual scene, the simulated image is displayed on a physical screen, and the physical camera is used to shoot the physical screen. A virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space. The method includes: acquiring focusing data corresponding to the physical camera; determining the position of a first focus after the physical camera focuses, and determining the relative positional relationship between the first focus and the physical screen; and if the first focus is located in front of the physical screen, adjusting the position of a second focus of the virtual camera onto the virtual screen model and sending the focusing data to the physical camera for focusing.

Description

Adjustment method and device for virtual camera
Technical Field
One or more embodiments of the present disclosure relate to the field of virtual shooting technologies, and in particular, to a method and an apparatus for adjusting a virtual camera.
Background
Virtual shooting based on physical screens is an emerging film and television production technology that combines physical screens (such as LED or LCD screens), real-time computer graphics, and traditional cinematography. In this technique, a physical screen serves as the background and displays a virtual scene designed in advance, so that actors and cameras can interact directly with the virtual scene. The image of the virtual scene displayed on the physical screen is rendered by a computer in real time and can be adjusted in real time according to shooting requirements. This allows filmmakers to see the final visual effect on set without waiting for the post-production phase, so the director and actors can better control camera work and performance during shooting to achieve the desired effect.
When virtual shooting is performed based on a physical screen, a virtual camera usually needs to be created in a computer, and the image of the virtual scene displayed on the physical screen is rendered according to the shooting-related data of that virtual camera; that is, the displayed image can be regarded as the result of the virtual camera performing simulated shooting of the virtual scene. The physical camera is used to shoot the physical screen, specifically the live scene together with the image displayed on the screen. In practical applications, it is usually desirable to make the virtual scene in the image captured by the physical camera as realistic as possible. This realism covers both how well the spatial positions and perspective relationships of virtual and real objects match, and how well their virtual focus (defocus) effects match. In the related art, the shooting-related data of the virtual camera is usually required to be consistent with that of the physical camera. Although this guarantees matched spatial positions and perspective relationships of virtual and real objects, it can cause a secondary virtual focus problem for the virtual scene in the image captured by the physical camera. Specifically, because the virtual camera follows the shooting parameters of the physical camera, when the image captured by the physical camera has a virtual focus effect (for example, because the physical camera is not correctly focused as required), the image captured by the virtual camera has the same virtual focus effect, so the image of the virtual scene displayed on the physical screen carries a first-pass virtual focus effect. The incorrectly focused physical camera then shoots that displayed image again, and the virtual scene portion of the resulting image exhibits a secondary virtual focus effect, so the virtual focus effects of the real scene portion and the virtual scene portion of the image no longer match.
Disclosure of Invention
One or more embodiments of the present disclosure provide the following technical solutions:
The present disclosure provides a method for adjusting a virtual camera, applied to a data processing program running on a computing device; a virtual three-dimensional space corresponding to a physical three-dimensional space is simulated on the computing device; a virtual camera corresponding to a physical camera is created in the virtual three-dimensional space; the virtual camera is used for performing simulated shooting of a preset virtual scene, and the simulated image is displayed on a physical screen; the physical camera is used for shooting the physical screen; a virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space; the method includes:
acquiring adjustment data corresponding to the physical camera, the adjustment data including focusing data;
determining, based on the focusing data, the position of a first focus after the physical camera focuses, and determining the relative positional relationship between the first focus and the physical screen; and
if the first focus is located in front of the physical screen, adjusting the position of a second focus of the virtual camera onto the virtual screen model, and sending the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
The present disclosure also provides an adjustment apparatus for a virtual camera, applied to a data processing program running on a computing device; a virtual three-dimensional space corresponding to a physical three-dimensional space is simulated on the computing device; a virtual camera corresponding to a physical camera is created in the virtual three-dimensional space; the virtual camera is used for performing simulated shooting of a preset virtual scene, and the simulated image is displayed on a physical screen; the physical camera is used for shooting the physical screen; a virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space; the apparatus includes:
an acquisition module, configured to acquire adjustment data corresponding to the physical camera, the adjustment data including focusing data;
a determining module, configured to determine, based on the focusing data, the position of a first focus after the physical camera focuses, and to determine the relative positional relationship between the first focus and the physical screen; and
an adjusting module, configured to, when the first focus is located in front of the physical screen, adjust the position of a second focus of the virtual camera onto the virtual screen model, and send the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
The present disclosure also provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of any of the methods described above by executing the executable instructions.
The present disclosure also provides a computer-readable storage medium having computer instructions stored thereon which, when executed by a processor, implement the steps of any of the methods described above.
In the above technical solution, a virtual three-dimensional space corresponding to the physical three-dimensional space can be simulated on a computing device, and a virtual camera corresponding to the physical camera and a virtual screen model corresponding to the physical screen can be created in the virtual three-dimensional space, where the virtual camera can be used to perform simulated shooting of a preset virtual scene whose simulated image is displayed on the physical screen, and the physical camera can be used to shoot the physical screen. When the data processing program on the computing device acquires adjustment data corresponding to the physical camera that contains focusing data, it can determine, based on the focusing data, the position of the first focus after the physical camera focuses, determine the relative positional relationship between the first focus and the physical screen, and, when the first focus is determined to be in front of the physical screen, adjust the position of the second focus of the virtual camera onto the virtual screen model and send the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
In this way, when virtual shooting is implemented, on the one hand, the position of the focus of the virtual camera corresponding to the physical camera can be adjusted automatically according to the current position of the focus of the physical camera, which reduces the virtual focus effect of the image captured by the virtual camera, avoids the secondary virtual focus problem, and removes the need to manually set an offset distance for the focus of the virtual camera; the adjustment is therefore more efficient and accurate, suits scenes in which the physical camera moves over a large range, and, because the focus position is adjusted automatically, adapts to screens of various shapes. On the other hand, when the focus of the physical camera needs to be moved in front of the physical screen, the way the focus of the virtual camera is adjusted can be determined from the predicted relative positional relationship between the focus of the physical camera and the physical screen, and it can be decided whether to adjust the focus of the physical camera accordingly; this reduces the virtual focus effect of the image captured by the virtual camera and ensures that, when the physical camera focuses and shoots according to the adjustment data, the virtual camera has moved its second focus onto the virtual screen model, so that the picture displayed on the screen is sharp and the negative impact of the secondary virtual focus problem is reduced.
Drawings
The drawings needed for describing the exemplary embodiments are briefly introduced below:
Fig. 1 shows a schematic diagram of an application scenario according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a method of adjusting a virtual camera according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a physical camera and a physical screen according to an embodiment of the present disclosure.
Fig. 4 shows another schematic diagram of a physical camera and a physical screen according to an embodiment of the present disclosure.
Fig. 5 shows a schematic structural diagram of an apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an adjustment device of a virtual camera according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present disclosure; rather, they are merely examples consistent with aspects of one or more embodiments of the present disclosure.
It should be noted that, in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this disclosure. In some other embodiments, the method may include more or fewer steps than described here. Furthermore, an individual step described in this disclosure may be broken down into multiple steps in other embodiments, while multiple steps described in this disclosure may be combined into a single step in other embodiments.
When virtual shooting is performed based on a physical screen, a virtual camera usually needs to be created in a computer, and the image of the virtual scene displayed on the physical screen is rendered according to the shooting-related data of that virtual camera; that is, the displayed image can be regarded as the result of the virtual camera performing simulated shooting of the virtual scene. The physical camera is used to shoot the physical screen, specifically the live scene (i.e., the scene in the real world) together with the image displayed on the screen. In practical applications, it is usually desirable to make the virtual scene in the image captured by the physical camera as realistic as possible. This realism covers both how well the spatial positions and perspective relationships of virtual and real objects match, and how well their virtual focus effects match. It is therefore necessary to ensure that the virtual focus effect of the virtual camera matches that of the physical camera in real time.
Virtual focus (defocus) is a photographic term: because the lens focus of the camera deviates from the photographed subject during shooting, the subject portion of the captured image is not rendered sharply but appears blurred or distorted. Virtual focus is generally caused by the lens focus not coinciding with the actual position of the photographed subject. It is not always negative, however; in some special cases a virtual focus effect is created deliberately to achieve a particular artistic effect, for example an abstract, dreamlike atmosphere.
In the related art, the shooting-related data of the virtual camera is usually required to be consistent with that of the physical camera. The shooting-related data may include internal parameters such as focal length, optical-center offset, distortion parameters, and pixel size, external parameters such as position data and attitude data (e.g., rotation angle), and adjustable shooting parameters such as focus and aperture. Although this consistency guarantees matched spatial positions and perspective relationships of virtual and real objects, it can cause a secondary virtual focus problem for the virtual scene in the image captured by the physical camera. Specifically, because the virtual camera follows the shooting parameters of the physical camera, when the image captured by the physical camera has a virtual focus effect (for example, because the physical camera is not correctly focused as required), the image captured by the virtual camera has the same virtual focus effect, so the image of the virtual scene displayed on the physical screen carries a first-pass virtual focus effect. The incorrectly focused physical camera then shoots that displayed image again, and the virtual scene portion of the resulting image exhibits a secondary virtual focus effect, so the virtual focus effects of the real scene portion and the virtual scene portion of the image no longer match.
To avoid the secondary virtual focus problem, a technician can manually set a certain offset distance for the focus of the virtual camera so as to reduce the virtual focus effect of the image captured by the virtual camera. However, manually setting the offset distance is inefficient and not very accurate, and it is difficult to adapt to scenes in which the physical camera moves over a large range. It generally suits only flat screens: for screens of other shapes, such as curved screens or tri-fold screens, the shape is not planar and manually estimating the focus offset distance of the virtual camera is much harder, so such screens are generally not supported.
One or more embodiments of the present disclosure provide a solution for adjusting a virtual camera. In this technical solution, a virtual three-dimensional space corresponding to the physical three-dimensional space can be simulated on the computing device, and a virtual camera corresponding to the physical camera and a virtual screen model corresponding to the physical screen are created in the virtual three-dimensional space, where the virtual camera can be used to perform simulated shooting of a preset virtual scene whose simulated image is displayed on the physical screen, and the physical camera can be used to shoot the physical screen. When the data processing program on the computing device acquires adjustment data corresponding to the physical camera that contains focusing data, it can determine, based on the focusing data, the position of the first focus after the physical camera focuses, determine the relative positional relationship between the first focus and the physical screen, and, when the first focus is determined to be in front of the physical screen, adjust the position of the second focus of the virtual camera onto the virtual screen model and send the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
In this way, when virtual shooting is implemented, on the one hand, the position of the focus of the virtual camera corresponding to the physical camera can be adjusted automatically according to the current position of the focus of the physical camera, which reduces the virtual focus effect of the image captured by the virtual camera, avoids the secondary virtual focus problem, and removes the need to manually set an offset distance for the focus of the virtual camera; the adjustment is therefore more efficient and accurate, suits scenes in which the physical camera moves over a large range, and, because the focus position is adjusted automatically, adapts to screens of various shapes. On the other hand, when the focus of the physical camera needs to be moved in front of the physical screen, the way the focus of the virtual camera is adjusted can be determined from the predicted relative positional relationship between the focus of the physical camera and the physical screen, and it can be decided whether to adjust the focus of the physical camera accordingly; when the physical camera focuses and shoots according to the adjustment data, the virtual camera is guaranteed to have moved its second focus onto the virtual screen model, so that the picture displayed on the screen is sharp, the virtual focus effect of the image captured by the virtual camera is reduced, and the negative impact of the secondary virtual focus problem is reduced.
The following describes in detail the technical solutions provided by one or more embodiments of the present disclosure.
Referring to fig. 1, fig. 1 shows a schematic diagram of an application scenario according to an embodiment of the disclosure.
As shown in fig. 1, the virtual shooting system includes three physical screens (021, 022, 023). A communication connection can be established between the mobile terminal 01 and the three physical screens (021, 022, 023), and between the mobile terminal 01 and the computing device 03. The mobile terminal 01 can control each physical screen to display an identification-graphic array and send a plurality of screen images, obtained by capturing images of each physical screen, to the computing device 03; the computing device 03 can then generate and display a virtual screen model corresponding to the three physical screens (021, 022, 023) from the screen images sent by the mobile terminal 01.
The physical screens used in the virtual shooting system may be LED display screens, liquid crystal (LCD) display screens, or other types, and may be curved or planar. It should be understood that a person skilled in the art may set the type, number, size, resolution, and so on of the physical screens in the virtual shooting system according to actual needs; the embodiments of the present disclosure are not limited in this respect, nor in the manner in which the devices communicate with each other.
In addition, the virtual shooting system includes a physical camera 04. Similarly, the computing device 03 can generate and display a virtual camera corresponding to the physical camera 04. The physical camera 04 can shoot the three physical screens (021, 022, 023) under the operation of the photographer, and can also establish a communication connection with the computing device 03.
It should be noted that a virtual three-dimensional space corresponding to the physical three-dimensional space (that is, the real three-dimensional space of the real world) can be simulated on the computing device 03, for example a virtual three-dimensional space that reproduces the physical three-dimensional space at a 1:1 scale. The three physical screens (021, 022, 023) and the physical camera 04 are real objects in the physical three-dimensional space, while the virtual screen model and the virtual camera are virtual objects created in the virtual three-dimensional space.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for adjusting a virtual camera according to an embodiment of the disclosure.
In this embodiment, the method for adjusting the virtual camera can be implemented in a virtual shooting system. The virtual shooting system may include a physical camera and a physical screen in a physical three-dimensional space, and a computing device on which a virtual three-dimensional space corresponding to the physical three-dimensional space can be simulated; it may further include a virtual screen model corresponding to the physical screen and a virtual camera corresponding to the physical camera, both created in the virtual three-dimensional space.
The virtual screen model may be a three-dimensional model rendered in the virtual three-dimensional space whose contour, position, and so on are consistent with those of the physical screen in the physical three-dimensional space; likewise, the virtual camera may be a three-dimensional model rendered in the virtual three-dimensional space whose contour, position, and so on are consistent with those of the physical camera in the physical three-dimensional space.
The virtual camera can be used to perform simulated shooting of a pre-designed virtual scene, map the simulated image onto the virtual screen model, and render the virtual screen model onto the physical screen for display. Because the parameters of the virtual screen model are consistent with those of the physical screen, mapping onto the virtual screen model allows the rendered picture to fit the physical screen better. Correspondingly, the physical camera can be used to shoot the physical screen, and the image captured by the physical camera includes a virtual scene portion.
In general, the virtual camera is kept consistent with the physical camera in internal parameters such as focal length, optical-center offset, distortion parameters, and pixel size; external parameters such as the position data and attitude data of the virtual camera in the virtual three-dimensional space are consistent with those of the physical camera in the physical three-dimensional space. Similarly, the shape, size, position data, and so on of the virtual screen model in the virtual three-dimensional space are consistent with those of the physical screen in the physical three-dimensional space.
In some embodiments, in order to finally implement virtual shooting, the data processing program running on the computing device may create the virtual camera corresponding to the physical camera in the virtual three-dimensional space; specifically, it may first create the virtual camera by initialization. At this point the internal parameters of the newly created virtual camera should already be consistent with those of the physical camera, but its external parameters may be random data; the external parameters of the virtual camera can then be adjusted to be consistent with those of the physical camera.
When the external parameters of the virtual camera are adjusted, the relative positional relationship between the physical camera and the physical screen in the physical three-dimensional space can be determined first. The position of the virtual camera in the virtual three-dimensional space can then be adjusted to a position corresponding to the virtual screen model (which may be called the target position), where the relative positional relationship between the target position and the virtual screen model is consistent with the relative positional relationship between the physical camera and the physical screen. In practical applications, the relative positional relationship between the physical camera and the physical screen may include the position of the center point of the physical camera relative to the physical screen, as well as the rotation angle of the lens of the physical camera relative to the physical screen. In this way, the external parameters of the virtual camera can be kept consistent with those of the physical camera.
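As an illustration of this pose alignment, the following is a minimal sketch, assuming 4x4 homogeneous transform matrices and invented helper names (the patent itself prescribes no particular implementation):

```python
import numpy as np

def align_virtual_camera(T_world_phys_cam: np.ndarray,
                         T_world_phys_screen: np.ndarray,
                         T_virt_world_screen: np.ndarray) -> np.ndarray:
    """Return a virtual-camera pose whose relationship to the virtual screen
    model matches the physical camera's relationship to the physical screen."""
    # Physical camera pose expressed in the physical screen's coordinate frame.
    T_screen_cam = np.linalg.inv(T_world_phys_screen) @ T_world_phys_cam
    # Re-express that same relative pose around the virtual screen model.
    return T_virt_world_screen @ T_screen_cam
```

Because the relative pose, not the absolute pose, is transferred, this works even if the virtual space is not perfectly registered to the physical space.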
The method for adjusting the virtual camera can be applied to a data processing program running on the computing device, and specifically comprises the following steps:
step 201: acquiring adjustment data corresponding to the entity camera; wherein the adjustment data includes focus data.
In this embodiment, the data processing program running on the computing device may acquire adjustment data corresponding to the physical camera. The adjustment data may specifically include focusing data.
The focus data may be used to cause the physical camera to focus. Focusing refers to a process of adjusting a lens of a camera to adjust a focus of the camera to a proper position so that a photographed subject is clearly visible in a photographed image. When the focus of the camera is located on the photographed subject, the subject portion of the photographed image may exhibit a clear and sharp effect, with clear and discernable details.
The adjustment data in step 201 may be obtained by a wireless focus follower, specific examples of which are described later.
In some embodiments, the adjustment data may further include aperture data.
The aperture data may be used to make the physical camera adjust its aperture. The aperture is the part of the camera lens that controls the opening through which light passes. It consists of a series of adjustable blades, and the amount of light entering the camera is controlled by adjusting how far the blades open or close. Adjusting the aperture size affects the depth of field and the exposure of the image: a larger aperture produces a shallow depth-of-field effect, in which the photographed subject is sharp and the background is blurred, while a smaller aperture produces a larger depth of field, in which both the photographed subject and the background remain sharp.
Step 202: determine, based on the focusing data, the position of a first focus after the physical camera focuses, and determine the relative positional relationship between the first focus and the physical screen.
In this embodiment, after the adjustment data is acquired, the physical camera does not immediately adjust its shooting-related data based on the adjustment data, for example by focusing based on the focusing data contained in it. Instead, the data processing program running on the computing device may first determine the position, in the physical three-dimensional space, of the focal point (which may be called the first focus) that the physical camera would have after focusing based on the focusing data, and then determine the relative positional relationship between the first focus and the physical screen in the physical three-dimensional space, so that the position of the focus of the virtual camera can be adjusted according to this relationship.
In some embodiments, the physical camera may carry a tracking system.
The tracking system can be implemented as a combination of software and hardware. For example, it may include a distance sensor, a steering-angle sensor, and a computing chip: the distance sensor and the steering-angle sensor collect motion data of the physical camera, and the computing chip calculates the external parameters of the physical camera from the collected motion data and sends them to the data processing program running on the computing device. Alternatively, the computing chip may send the collected motion data directly to the data processing program, which then calculates the external parameters of the physical camera itself.
When predicting the position of the first focus after the physical camera focuses based on the focusing data, since the physical camera carries the tracking system, the external parameters of the physical camera can be acquired through the tracking system, the initial position of the focus of the physical camera in the physical three-dimensional space can be calculated from the external and internal parameters of the physical camera, and the position in the physical three-dimensional space of the first focus after focusing based on the focusing data can then be calculated from the focusing data.
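A minimal sketch of this prediction, assuming the tracking-system extrinsics yield the camera's optical center and forward direction, and that the focusing data has already been converted to a metric focus distance (in practice this conversion is lens-specific, typically obtained from a calibration look-up table):

```python
import numpy as np

def first_focus_position(cam_position: np.ndarray,
                         cam_forward: np.ndarray,
                         focus_distance_m: float) -> np.ndarray:
    """Place the predicted first focus at the given focus distance along
    the camera's optical axis, derived from the tracking-system extrinsics."""
    direction = cam_forward / np.linalg.norm(cam_forward)
    return cam_position + focus_distance_m * direction

# Example: a camera at the origin looking along +Y, focused at 3 m.
focus_point = first_focus_position(np.zeros(3), np.array([0.0, 1.0, 0.0]), 3.0)
```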
Accordingly, when determining the relative positional relationship between the first focus of the physical camera and the physical screen, position data of the physical screen in the physical three-dimensional space may be acquired, for example position data sent by a positioning system carried on the physical screen. The relative positional relationship between the first focus and the physical screen is then determined from the calculated position of the first focus and the acquired position of the physical screen in the physical three-dimensional space.
Referring to fig. 3, fig. 3 shows a schematic diagram of a physical camera and a physical screen according to an embodiment of the present disclosure.
In practical applications, the display surface of the screen can be regarded as its front face and the non-display surface as its back face. As shown in fig. 3, if the focus of the physical camera lies in front of the front face of the screen, the focus is considered to be in front of the physical screen; if it lies on the front face, the focus is considered to be on the physical screen; and if it lies behind the back face, the focus is considered to be behind the physical screen.
In some embodiments, when determining the relative positional relationship between the first focus of the physical camera and the physical screen, the distance (which may be called the first distance) between the first focus and the physical camera (specifically, its lens) in the physical three-dimensional space may be acquired first, and the distance (which may be called the second distance) between the virtual camera (specifically, its lens) and the virtual screen model in the virtual three-dimensional space may be calculated. Because the positions of the virtual camera and the virtual screen model in the virtual three-dimensional space correspond to those of the physical camera and the physical screen in the physical three-dimensional space, the distance between the virtual camera and the virtual screen model can be regarded as the distance between the physical camera and the physical screen. The relative positional relationship between the first focus and the physical screen can therefore be determined by comparing the first distance with the second distance.
Specifically, if the first distance is smaller than the second distance, the first focus can be determined to be in front of the physical screen; if the first distance equals the second distance, the first focus is on the physical screen; and if the first distance is greater than the second distance, the first focus is behind the physical screen.
With continued reference to fig. 3, a first focus in front of the front face of the screen means that the first distance is smaller than the second distance, so the first focus is in front of the physical screen; a first focus on the front face means that the two distances are equal, so the first focus is on the physical screen; and a first focus behind the back face means that the first distance is greater than the second distance, so the first focus is behind the physical screen.
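The comparison itself might be sketched as follows; the enum and the tolerance (standing in for exact equality on real-valued measurements) are assumptions, not part of the patent:

```python
from enum import Enum

class FocusRelation(Enum):
    IN_FRONT_OF_SCREEN = 1
    ON_SCREEN = 2
    BEHIND_SCREEN = 3

def classify(first_distance: float, second_distance: float,
             tolerance: float = 1e-3) -> FocusRelation:
    """Compare the first distance (camera to first focus) with the second
    distance (camera to screen), treating near-equal values as 'on screen'."""
    if first_distance < second_distance - tolerance:
        return FocusRelation.IN_FRONT_OF_SCREEN
    if first_distance > second_distance + tolerance:
        return FocusRelation.BEHIND_SCREEN
    return FocusRelation.ON_SCREEN
```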
In some embodiments, when calculating the second distance between the virtual camera and the virtual screen model, a view frustum (view cone) model corresponding to the virtual camera may first be established.
A view frustum model is a model used in computer graphics to represent a three-dimensional scene. It divides the scene space into two parts: a view volume and a clipping volume.
The view volume is the geometry of the visible portion that extends into the scene from the viewpoint of the camera or viewer. It is usually a tetrahedron or hexahedron bounded by a near clipping plane and a far clipping plane, with vertices determined by the viewpoint position and viewing direction. The view volume defines a virtual two-dimensional projection plane; the final two-dimensional image is generated by mapping the objects in space onto this plane.
The clipping volume is the region between the view volume and the screen or viewport. It is defined by applying clipping operations that reduce the objects inside the view volume to the parts that are finally visible on the screen. Clipping operations may include culling objects outside the field of view, culling occluded objects, performing perspective transformation, and so on.
The view frustum model corresponding to the virtual camera therefore determines the viewing-angle range of the virtual camera, that is, the portion of the scene that the virtual camera can capture.
The virtual screen model may be a three-dimensional model rendered in the virtual three-dimensional space. A three-dimensional model is generally composed of elements such as vertices, faces, and edges, and carries attributes such as normal vectors and vertex colors.
Vertices are the basic building elements of a three-dimensional model and define its position information; each vertex typically has three-dimensional coordinates, i.e., a position in three-dimensional space. A face is a polygon, typically a triangle or quadrilateral, made up of adjacent vertices; faces define the surface geometry of the model, and connecting vertices forms planar geometries such as cubes and spheres. Each face is specified by indices of the vertices it contains. An edge is the connecting line between two adjacent vertices; boundary edges are the boundary portions of the model and define its edge shape.
In practical applications, if the physical screen is an LED screen, the vertices of the virtual screen model may be the four front-face vertices of each (typically square) LED cabinet that makes up the LED screen.
In this case, it can be determined whether each vertex of the virtual screen model lies within the viewing-angle range corresponding to the view frustum model. If at least one vertex lies within that range, the distances between those vertices and the virtual camera (specifically, its lens) can be determined and averaged, and the average distance can be taken as the second distance between the virtual camera and the virtual screen model.
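A minimal sketch of this vertex test, assuming a column-vector 4x4 view-projection matrix and the standard clip-space inside test; none of the names come from the patent:

```python
from typing import Optional
import numpy as np

def second_distance(view_proj: np.ndarray,
                    cam_position: np.ndarray,
                    screen_vertices: np.ndarray) -> Optional[float]:
    """Average distance from the virtual camera to the screen-model vertices
    that fall inside the view frustum; None if no vertex is visible."""
    visible = []
    for v in screen_vertices:
        clip = view_proj @ np.append(v, 1.0)  # homogeneous clip coordinates
        w = clip[3]
        if w > 0 and np.all(np.abs(clip[:3]) <= w):  # inside all six planes
            visible.append(float(np.linalg.norm(v - cam_position)))
    return sum(visible) / len(visible) if visible else None
```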
In some embodiments, instead of directly determining the relative positional relationship between the first focus of the physical camera and the physical screen, the relative positional relationship between the front depth-of-field point of the physical camera and the physical screen may be determined. It should be noted that, in the present disclosure, the distance between the front depth-of-field point of the physical camera and the focal point of the physical camera is the front depth of field of the physical camera.
Depth of field is the range in a photograph that is considered sharp, i.e., the in-focus range in front of and behind the subject in the image. In practical applications, the in-focus range in front of the subject is the front depth of field: the range, extending forward from the focal point the lens is aligned on, within which objects remain sharp in the captured image. The in-focus range behind the subject is the rear depth of field: the range, extending backward from the focal point, within which objects remain sharp.
Depth of field is often used to control the sharpness and spatial perception of an image. It is affected by many factors, chief among them aperture, focal length, and shooting distance. A larger aperture, a longer focal length, and a closer shooting distance generally produce a shallow depth-of-field effect, in which only a small part of the photographed subject is sharp and other areas are blurred; a smaller aperture, a shorter focal length, and a longer shooting distance produce a larger depth of field, in which the subject and its surroundings remain sharp.
That is, when determining the relative positional relationship between the first focus of the physical camera and the physical screen, the front depth of field that the physical camera would have after focusing based on the focusing data and adjusting the aperture based on the aperture data may first be determined, and the position of the front depth-of-field point may be determined from the position of the first focus and the front depth of field.
The relative positional relationship between the front depth-of-field point and the physical screen can then be determined and used as the relative positional relationship between the first focus and the physical screen.
Referring to fig. 4, fig. 4 shows a schematic diagram of another physical camera and physical screen according to an embodiment of the present disclosure.
As shown in fig. 4, the front depth-of-field point of the physical camera lies in front of its focal point, and the distance between the front depth-of-field point and the focal point is the front depth of field of the physical camera; the rear depth-of-field point lies behind the focal point, and the distance between the rear depth-of-field point and the focal point is the rear depth of field of the physical camera.
In this case, if the front depth-of-field point of the physical camera lies in front of the front face of the screen, the focus of the physical camera is considered to be in front of the physical screen; if it lies on the front face, the focus is considered to be on the physical screen; and if it lies behind the back face, the focus is considered to be behind the physical screen.
Because the distance between the front depth-of-field point and the physical screen is greater than the distance between the actual first focus and the physical screen, this extends the adjustment range of the focus of the physical camera: when the front depth-of-field point is in front of the physical screen, the focus itself may lie some distance behind the screen, as long as that distance does not exceed the front depth of field. Such a focusing result is generally acceptable, which is equivalent to appropriately extending the adjustment range of the focus of the physical camera toward the region behind the screen.
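The patent gives no formula for the front depth of field; one hedged sketch uses the classical thin-lens approximation, with the circle-of-confusion diameter and all names chosen purely for illustration:

```python
def front_depth_of_field(focus_distance_m: float,
                         focal_length_mm: float,
                         f_number: float,
                         coc_mm: float = 0.03) -> float:
    """Front depth of field = focus distance minus the near sharpness limit,
    using the classical thin-lens formulas: hyperfocal distance
    H = f^2 / (N * c) + f and near limit s * (H - f) / (H + s - 2f)."""
    f = focal_length_mm / 1000.0   # focal length in metres
    c = coc_mm / 1000.0            # circle of confusion in metres
    s = focus_distance_m
    hyperfocal = f * f / (f_number * c) + f
    near_limit = s * (hyperfocal - f) / (hyperfocal + s - 2.0 * f)
    return s - near_limit

# Example: a 50 mm lens at f/2.8 focused at 3 m gives roughly 0.27 m
# of front depth of field; the front depth-of-field point then lies
# that far in front of the first focus along the optical axis.
front_dof = front_depth_of_field(3.0, 50.0, 2.8)
```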
Step 203: if the first focus is located in front of the physical screen, adjust the position of the second focus of the virtual camera onto the virtual screen model, and send the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
In this embodiment, if the first focus of the physical camera is determined to be in front of the physical screen, the data processing program running on the computing device may adjust the position of the focus of the virtual camera (which may be called the second focus) in the virtual three-dimensional space so that it falls on the virtual screen model. Analogously to the relative positional relationship between the first focus and the physical screen described above, if the second focus lies on the front face of the virtual screen model in the virtual three-dimensional space, the second focus can be considered to be on the virtual screen model.
In addition, if the first focus of the physical camera is determined to be in front of the physical screen, the adjustment data may be sent to the physical camera so that the physical camera adjusts its shooting-related data based on the adjustment data, for example focusing based on the focusing data contained in the adjustment data and adjusting the aperture based on the aperture data contained in it.
Adjusting the position of the second focus of the virtual camera reduces, to a certain extent, the virtual focus effect of the image captured by the virtual camera and thereby reduces the negative impact of the secondary virtual focus problem. The aperture of the virtual camera can therefore be adjusted directly to match the aperture of the physical camera described above. After the shooting-related data such as focus and aperture have been set for the virtual camera, the virtual camera performs the simulated shooting operation.
In general, virtual focus occurs not only when the lens of the camera is focused inaccurately but also when insufficient light enters the camera, so the aperture also affects the virtual focus effect of the camera's shooting.
To avoid problems in the physical camera's footage caused by secondary virtual focus, the data processing program running on the computing device may calculate a usable value for the aperture of the virtual camera. When the aperture of the virtual camera takes this value, and the physical camera shoots the real scene together with the image of the virtual scene displayed on the physical screen (obtained by the virtual camera's simulated shooting), the virtual focus effect changes continuously across the real scene portion and the virtual scene portion of the resulting image, so the two portions have a correct hierarchical relationship. In practical applications, a continuously changing virtual focus effect means that, in an image captured by a camera, scenery farther from the focal point the lens is aligned on appears more blurred, and scenery closer to the focal point appears sharper.
In some embodiments, when adjusting the position of the second focus of the virtual camera onto the virtual screen model, the position of the second focus in the virtual three-dimensional space may be adjusted onto the virtual screen model based on the model data of the virtual screen model and the space data of the virtual three-dimensional space. For example, the space data of the virtual three-dimensional space may include the three-dimensional coordinate system corresponding to that space, and the model data of the virtual screen model may include the three-dimensional coordinates of each of its vertices in that coordinate system; on this basis, the position of the second focus can be adjusted by changing the three-dimensional coordinates that represent it, thereby moving the second focus onto the virtual screen model.
Depending on how the virtual camera is modeled, the position of the second focus may be changed directly, as a variable parameter of the virtual camera, or indirectly, by changing other parameters of the virtual camera such as the lens position; the present disclosure does not limit this.
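As one possible realization for a flat screen, the optical axis of the virtual camera can be intersected with the screen plane; a curved or tri-fold screen would use a ray-mesh intersection against the virtual screen model instead. A minimal sketch under these assumptions (no names come from the patent):

```python
import numpy as np

def project_focus_onto_screen(cam_position: np.ndarray,
                              cam_forward: np.ndarray,
                              plane_point: np.ndarray,
                              plane_normal: np.ndarray):
    """Intersect the virtual camera's optical axis with the screen plane and
    return the intersection point as the new position of the second focus."""
    d = cam_forward / np.linalg.norm(cam_forward)
    denom = float(np.dot(plane_normal, d))
    if abs(denom) < 1e-9:
        return None  # optical axis is parallel to the screen plane
    t = float(np.dot(plane_normal, plane_point - cam_position)) / denom
    return cam_position + t * d if t > 0 else None
```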
In some embodiments, if the first focus of the physical camera is determined not to be in front of the physical screen, the lens focus of the physical camera deviates from both the physical screen and the actors performing in front of it, so no part of the image captured by the physical camera is sharp and the whole image is blurred or distorted. This is generally regarded as an abnormal situation during shooting.
In this case, the data processing program running on the computing device may adjust the position of the second focus of the virtual camera in the virtual three-dimensional space to a position (which may be called the target position) corresponding to the virtual screen model, where the relative positional relationship between the target position and the virtual screen model is kept consistent with the relative positional relationship between the first focus of the physical camera and the physical screen.
It should be noted that, if the first focus of the physical camera is determined not to be in front of the physical screen, the adjustment data need not be sent to the physical camera; that is, the physical camera does not adjust its shooting-related data based on the adjustment data.
In some embodiments, if the first focus of the physical camera is determined not to be in front of the physical screen, the data processing program running on the computing device may output a prompt to the user indicating that the focus of the physical camera is out of range.
In some embodiments, the user may use a wireless focus follower to adjust the shooting-related data of the physical camera.
A wireless focus follower is a device for remotely controlling lens focus; it communicates with the camera by wireless signal transmission. A wireless focus follower typically consists of two parts: a transmitter (a handheld device) and a receiver (mounted on the camera). The user issues a focusing instruction through the handheld transmitter, which converts the instruction into a wireless signal and sends it to the receiver by wireless means (e.g., radio waves or infrared). After receiving the wireless signal, the receiver converts it into a recognizable instruction and passes it to the camera. According to the received instruction, the camera adjusts the distance between the lens and the image sensor by controlling the focusing mechanism inside the lens (such as an electric motor), thereby achieving the focusing effect, and adjusts the aperture by changing how far the adjustable blades open or close.
That is, the transmitter of the wireless focus follower corresponding to the physical camera may establish a wireless connection (which may be called the first wireless connection) with the computing device, and the receiver of the wireless focus follower mounted on the physical camera may also establish a wireless connection (which may be called the second wireless connection) with the computing device.
When the adjustment data corresponding to the physical camera is acquired, it may specifically be the adjustment data sent by the transmitter over the first wireless connection.
When the adjustment data is sent to the physical camera so that the physical camera adjusts its shooting-related data based on it, the adjustment data may specifically be sent to the receiver over the second wireless connection, and the receiver controls the physical camera to adjust its shooting-related data based on the adjustment data.
In an exemplary application scenario, the wireless focus follower may be used to determine the current focusing data and aperture data of the physical camera (i.e., the adjustment data in step 201) and send them to the computing device, without making any adjustment to the physical camera yet. The computing device determines the relative positional relationship between the first focus and the physical screen according to the adjustment data, as in step 202. If the first focus is in front of the physical screen, step 203 is executed: the position of the second focus of the virtual camera is adjusted onto the virtual screen model to prevent the virtual camera from shooting out of focus, and the adjustment data from step 201 is then sent to the physical camera to adjust it. If the first focus is not in front of the physical screen, only the position of the second focus of the virtual camera is adjusted, to the target position, so that the virtual camera stays consistent with the physical camera; the physical camera is not adjusted according to the adjustment data, and a prompt that the focus of the physical camera is out of range is output to the user.
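Wiring the helper sketches above together, one pass of this scenario might look like the following; the device objects, their attributes, and the receiver interface are all assumptions, not the patent's reference implementation:

```python
import numpy as np

def handle_adjustment(adjust, phys_cam, virt_cam, screen_model, follower_rx):
    """One pass over steps 201-203, built from the helper sketches above."""
    d1 = adjust.focus_distance_m                       # first distance
    d2 = second_distance(virt_cam.view_proj, virt_cam.position,
                         screen_model.vertices)        # second distance
    if d2 is not None and classify(d1, d2) is FocusRelation.IN_FRONT_OF_SCREEN:
        # Move the virtual camera's second focus onto the virtual screen
        # model, then let the physical camera apply the adjustment data.
        virt_cam.focus = project_focus_onto_screen(
            virt_cam.position, virt_cam.forward,
            screen_model.plane_point, screen_model.plane_normal)
        follower_rx.send(adjust)
    else:
        # Keep the virtual camera consistent with the physical camera
        # (the target position); do not forward the adjustment data.
        # This assumes the virtual space is registered 1:1 to the
        # physical space, as described earlier.
        virt_cam.focus = first_focus_position(
            np.asarray(virt_cam.position), np.asarray(virt_cam.forward), d1)
        print("prompt: physical camera focus out of range")
```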
In the above technical solution, a virtual three-dimensional space corresponding to the physical three-dimensional space can be simulated on a computing device, and a virtual camera corresponding to the physical camera and a virtual screen model corresponding to the physical screen can be created in the virtual three-dimensional space, where the virtual camera can be used to perform simulated shooting of a preset virtual scene whose simulated image is displayed on the physical screen, and the physical camera can be used to shoot the physical screen. When the data processing program on the computing device acquires adjustment data corresponding to the physical camera that contains focusing data, it can determine, based on the focusing data, the position of the first focus after the physical camera focuses, determine the relative positional relationship between the first focus and the physical screen, and, when the first focus is determined to be in front of the physical screen, adjust the position of the second focus of the virtual camera onto the virtual screen model and send the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
In this way, when virtual shooting is implemented, on the one hand, the position of the focus of the virtual camera corresponding to the physical camera can be adjusted automatically according to the current position of the focus of the physical camera, which reduces the virtual focus effect of the image captured by the virtual camera, avoids the secondary virtual focus problem, and removes the need to manually set an offset distance for the focus of the virtual camera; the adjustment is therefore more efficient and accurate, suits scenes in which the physical camera moves over a large range, and, because the focus position is adjusted automatically, adapts to screens of various shapes. On the other hand, when the focus of the physical camera needs to be moved in front of the physical screen, the way the focus of the virtual camera is adjusted can be determined from the predicted relative positional relationship between the focus of the physical camera and the physical screen, and it can be decided whether to adjust the focus of the physical camera accordingly; when the physical camera focuses and shoots according to the adjustment data, the virtual camera is guaranteed to have moved its second focus onto the virtual screen model, so that the picture displayed on the screen is sharp, the virtual focus effect of the image captured by the virtual camera is reduced, and the negative impact of the secondary virtual focus problem is reduced.
Corresponding to the foregoing embodiments of the method for adjusting a virtual camera, the present disclosure also provides embodiments of an adjusting device for a virtual camera.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a device according to an exemplary embodiment of the present disclosure. At the hardware level, the device comprises a processor 501, an internal bus 502, a network interface 503, a memory 504 and a non-volatile storage 505, and may of course also comprise other required hardware. One or more embodiments of the present disclosure may be implemented in software, for example by the processor 501 reading a corresponding computer program from the non-volatile storage 505 into the memory 504 and then running it. Of course, in addition to a software implementation, one or more embodiments of the present disclosure do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic modules and may also be a hardware or logic device.
Referring to fig. 6, fig. 6 is a block diagram of an adjusting apparatus of a virtual camera according to an exemplary embodiment of the present disclosure.
The adjustment device of the virtual camera can be applied to the device shown in fig. 5 to implement the technical solution of the present disclosure. The adjustment device is applied to a data processing program running on a computing device; a virtual three-dimensional space corresponding to the physical three-dimensional space is simulated on the computing device; a virtual camera corresponding to the physical camera is created in the virtual three-dimensional space; the virtual camera is used for performing simulated shooting on a preset virtual scene, and the image obtained by simulated shooting is displayed through the physical screen; the physical camera is used for shooting the physical screen; a virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space. The device comprises:
an obtaining module 601, configured to obtain adjustment data corresponding to the physical camera, wherein the adjustment data includes focusing data;

a determining module 602, configured to determine the position of a first focus after the physical camera focuses based on the focusing data, and to determine the relative positional relationship between the first focus and the physical screen;

and an adjusting module 603, configured to, when the first focus is located in front of the physical screen, adjust the position of the second focus of the virtual camera onto the virtual screen model and send the adjustment data to the physical camera, so that the physical camera focuses based on the focusing data.
Optionally, the adjusting module 603 is further configured to:
if the first focus is not located in front of the physical screen, adjust the position of the second focus to a target position corresponding to the virtual screen model, wherein the relative positional relationship between the target position and the virtual screen model is consistent with the relative positional relationship between the first focus and the physical screen.
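As an illustration only (the names, and the assumption that the virtual space mirrors the physical stage one-to-one, are ours rather than the disclosure's), the target-position mapping could be as simple as reproducing the first focus's displacement from the physical screen on the virtual side:

```python
import numpy as np

def mirrored_target(first_focus, phys_screen_origin, virt_screen_origin):
    # Give the second focus the same displacement from the virtual screen
    # model that the first focus has from the physical screen.
    return virt_screen_origin + (first_focus - phys_screen_origin)

# Example: a focus 0.8 m behind the screen stays 0.8 m behind the model.
target = mirrored_target(
    first_focus=np.array([0.5, 1.7, 6.8]),
    phys_screen_origin=np.array([0.0, 0.0, 6.0]),
    virt_screen_origin=np.array([0.0, 0.0, 6.0]),
)  # -> array([0.5, 1.7, 6.8])
```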
Optionally, the adjustment data further includes aperture data;

the sending of the adjustment data to the physical camera so that the physical camera focuses based on the focusing data includes:

sending the adjustment data to the physical camera, so that the physical camera focuses based on the focusing data and performs aperture adjustment based on the aperture data.
Optionally, determining the relative positional relationship between the first focus and the physical screen includes:

determining the front depth of field after the physical camera focuses based on the focusing data and adjusts the aperture based on the aperture data, and determining the position of a front depth-of-field point based on the position of the first focus and the front depth of field, wherein the distance between the position of the front depth-of-field point and the position of the first focus is the front depth of field;

and determining the relative positional relationship between the front depth-of-field point and the physical screen as the relative positional relationship between the first focus and the physical screen.
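The disclosure does not fix a formula for the front depth of field; a common thin-lens approximation, used here purely as an illustration, is ΔL_front = N·c·L² / (f² + N·c·L), where N is the f-number from the aperture data, c the circle-of-confusion diameter, L the focus distance, and f the focal length:

```python
def front_depth_of_field(f_number, coc, focus_dist, focal_length):
    """Thin-lens approximation of the sharp zone in front of the focal plane.

    All quantities in metres; coc is the circle-of-confusion diameter.
    """
    return (f_number * coc * focus_dist ** 2) / (
        focal_length ** 2 + f_number * coc * focus_dist
    )

# Example: 50 mm lens at f/2.8 focused at 4 m, c = 0.03 mm (full frame).
dof = front_depth_of_field(2.8, 0.03e-3, 4.0, 50e-3)   # ~0.47 m
# The front depth-of-field point then sits ~3.53 m from the camera; it is
# this point, not the first focus itself, that is compared with the screen.
```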
Optionally, the physical camera carries a tracking system;

determining the position of the first focus after the physical camera focuses based on the focusing data includes:

acquiring the external parameters of the physical camera sent by the tracking system;

and determining, based on the external parameters of the physical camera and the focusing data, the position of the first focus after the physical camera focuses.
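For illustration (the rotation-matrix convention and the choice of +Z as the optical axis are our assumptions, not part of the disclosure), the computation might look like:

```python
import numpy as np

def first_focus_from_extrinsics(position, rotation, focus_distance):
    # The tracking system supplies position and pose; the focus distance
    # is then walked along the camera's optical axis.
    forward = rotation @ np.array([0.0, 0.0, 1.0])
    return np.asarray(position, dtype=float) + focus_distance * forward

# Example: camera at the origin, rotated 90 degrees about the Y axis,
# focused at 3 m -> first focus at [3, 0, 0].
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
focus = first_focus_from_extrinsics([0.0, 0.0, 0.0], R, 3.0)
```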
Optionally, adjusting the position of the second focus of the virtual camera onto the virtual screen model includes:

adjusting the position of the second focus of the virtual camera onto the virtual screen model based on the model data of the virtual screen model and the spatial data of the virtual three-dimensional space.
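One way to realize this, sketched here with a single plane standing in for the screen model (a real virtual screen model could be a mesh matching a curved or segmented LED wall), is to intersect the virtual camera's optical axis with the model:

```python
import numpy as np

def focus_on_screen_model(cam_pos, cam_forward, plane_point, plane_normal):
    """Intersect the optical axis with the screen plane; None if no hit."""
    d = cam_forward / np.linalg.norm(cam_forward)
    denom = float(np.dot(plane_normal, d))
    if abs(denom) < 1e-9:
        return None                      # axis parallel to the screen plane
    t = float(np.dot(plane_normal, plane_point - cam_pos)) / denom
    return cam_pos + t * d if t > 0 else None

hit = focus_on_screen_model(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
    np.array([0.0, 0.0, 6.0]), np.array([0.0, 0.0, -1.0]),
)  # -> [0, 0, 6]: the second focus lands exactly on the screen model
```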
Optionally, the apparatus further comprises:

an output module, configured to output to the user a prompt that the focus of the physical camera is out of range when the first focus is not located in front of the physical screen.
Optionally, the transmitter of a wireless focus follower corresponding to the physical camera establishes a first wireless connection with the computing device; the receiver of the wireless focus follower is mounted on the physical camera and establishes a second wireless connection with the computing device;

obtaining the adjustment data corresponding to the physical camera includes:

acquiring, through the first wireless connection, the adjustment data corresponding to the physical camera transmitted by the transmitter;

sending the adjustment data to the physical camera so that the physical camera focuses based on the focusing data includes:

sending the adjustment data to the receiver through the second wireless connection, so that the receiver controls the physical camera to focus based on the focusing data.
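Sketched below, with every concrete detail (UDP, port numbers, JSON payloads, the receiver's hostname) being an assumption rather than anything the disclosure specifies, is the relay role the computing device plays between the two wireless connections:

```python
import json
import socket

TRANSMITTER_PORT = 9100                          # first wireless connection
RECEIVER_ADDR = ("camera-receiver.local", 9200)  # second wireless connection

def relay_loop():
    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind(("0.0.0.0", TRANSMITTER_PORT))
    outbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        raw, _ = inbound.recvfrom(4096)
        adjustment = json.loads(raw)             # e.g. {"focus_distance": 4.0}
        # The virtual camera's second focus would be updated here, before
        # (or instead of) forwarding, per the branching described earlier.
        outbound.sendto(raw, RECEIVER_ADDR)
```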
Optionally, the external parameters include position data and attitude data.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The device embodiments described above are merely illustrative: modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the technical solution of the disclosure.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage, quantum memory, graphene-based storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing has described certain embodiments of the present disclosure. Other embodiments are within the scope of the present disclosure. In some cases, the acts or steps recited in the present disclosure may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the disclosure. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" refers to and encompasses any or all possible combinations of one or more of the associated listed items.
The description of the terms "one embodiment," "some embodiments," "example," "specific example," or "one implementation" and the like as used in connection with one or more embodiments of the present disclosure means that a particular feature or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The schematic descriptions of these terms are not necessarily directed to the same embodiment. Furthermore, the particular features or characteristics described may be combined in any suitable manner in one or more embodiments of the disclosure. Furthermore, different embodiments, as well as specific features or characteristics of different embodiments, may be combined without contradiction.
It should be understood that while the terms first, second, third, etc. may be used in one or more embodiments of the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "when …", "upon …", or "in response to determining".
The foregoing description of the preferred embodiment(s) of the present disclosure is merely intended to illustrate the embodiment(s) of the present disclosure, and any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the embodiment(s) of the present disclosure are intended to be included within the scope of the present disclosure.
User information (including but not limited to user equipment information, personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) referred to in this disclosure are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation portals are provided for users to choose to grant or deny authorization.

Claims (11)

1. A method for adjusting a virtual camera, applied to a data processing program running on a computing device, wherein a virtual three-dimensional space corresponding to a physical three-dimensional space is simulated on the computing device; a virtual camera corresponding to a physical camera is created in the virtual three-dimensional space; the virtual camera is used for performing simulated shooting on a preset virtual scene, and an image obtained by the simulated shooting is displayed through a physical screen; the physical camera is used for shooting the physical screen; and a virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space; the method comprising:

acquiring adjustment data corresponding to the physical camera, wherein the adjustment data includes focusing data;

determining the position of a first focus after the physical camera focuses based on the focusing data, and determining the relative positional relationship between the first focus and the physical screen;

and if the first focus is located in front of the physical screen, adjusting the position of a second focus of the virtual camera onto the virtual screen model, and sending the adjustment data to the physical camera so that the physical camera focuses based on the focusing data.
2. The method of claim 1, further comprising:

if the first focus is not located in front of the physical screen, adjusting the position of the second focus to a target position corresponding to the virtual screen model, wherein the relative positional relationship between the target position and the virtual screen model is consistent with the relative positional relationship between the first focus and the physical screen.
3. The method of claim 1, wherein the adjustment data further includes aperture data;

the sending of the adjustment data to the physical camera so that the physical camera focuses based on the focusing data comprises:

sending the adjustment data to the physical camera, so that the physical camera focuses based on the focusing data and performs aperture adjustment based on the aperture data.
4. The method according to claim 3, wherein the determining of the relative positional relationship between the first focus and the physical screen comprises:

determining the front depth of field after the physical camera focuses based on the focusing data and adjusts the aperture based on the aperture data, and determining the position of a front depth-of-field point based on the position of the first focus and the front depth of field, wherein the distance between the position of the front depth-of-field point and the position of the first focus is the front depth of field;

and determining the relative positional relationship between the front depth-of-field point and the physical screen as the relative positional relationship between the first focus and the physical screen.
5. The method of claim 1, wherein the physical camera carries a tracking system;

the determining of the position of the first focus after the physical camera focuses based on the focusing data comprises:

acquiring external parameters of the physical camera sent by the tracking system;

and determining, based on the external parameters of the physical camera and the focusing data, the position of the first focus after the physical camera focuses.
6. The method of claim 1, wherein the adjusting of the position of the second focus of the virtual camera onto the virtual screen model comprises:

adjusting the position of the second focus of the virtual camera onto the virtual screen model based on model data of the virtual screen model and spatial data of the virtual three-dimensional space.
7. The method of claim 1, further comprising:

if the first focus is not located in front of the physical screen, outputting to a user a prompt that the focus of the physical camera is out of range.
8. The method of claim 1, wherein a transmitter of a wireless focus follower corresponding to the physical camera establishes a first wireless connection with the computing device; a receiver of the wireless focus follower is mounted on the physical camera; and the receiver establishes a second wireless connection with the computing device;

the acquiring of the adjustment data corresponding to the physical camera comprises:

acquiring, through the first wireless connection, the adjustment data corresponding to the physical camera transmitted by the transmitter;

the sending of the adjustment data to the physical camera so that the physical camera focuses based on the focusing data comprises:

sending the adjustment data to the receiver through the second wireless connection, so that the receiver controls the physical camera to focus based on the focusing data.
9. An adjustment device for a virtual camera, applied to a data processing program running on a computing device, wherein a virtual three-dimensional space corresponding to a physical three-dimensional space is simulated on the computing device; a virtual camera corresponding to a physical camera is created in the virtual three-dimensional space; the virtual camera is used for performing simulated shooting on a preset virtual scene, and an image obtained by the simulated shooting is displayed through a physical screen; the physical camera is used for shooting the physical screen; and a virtual screen model corresponding to the physical screen is also created in the virtual three-dimensional space; the device comprising:

an obtaining module, configured to obtain adjustment data corresponding to the physical camera, wherein the adjustment data includes focusing data;

a determining module, configured to determine the position of a first focus after the physical camera focuses based on the focusing data, and to determine the relative positional relationship between the first focus and the physical screen;

and an adjusting module, configured to, when the first focus is located in front of the physical screen, adjust the position of a second focus of the virtual camera onto the virtual screen model and send the adjustment data to the physical camera, so that the physical camera focuses based on the focusing data.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 8 by executing the executable instructions.
11. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any of claims 1 to 8.