CN115442542A - Method and device for splitting mirror - Google Patents

Method and device for splitting mirror

Info

Publication number
CN115442542A
Authority
CN
China
Prior art keywords
virtual
camera
scene
bitmap
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211396457.XA
Other languages
Chinese (zh)
Other versions
CN115442542B (en)
Inventor
任志忠 (Ren Zhizhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tiantu Wanjing Technology Co ltd
Original Assignee
Beijing Tiantu Wanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tiantu Wanjing Technology Co ltd filed Critical Beijing Tiantu Wanjing Technology Co ltd
Priority to CN202211396457.XA priority Critical patent/CN115442542B/en
Publication of CN115442542A publication Critical patent/CN115442542A/en
Application granted granted Critical
Publication of CN115442542B publication Critical patent/CN115442542B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the invention provides a mirror-splitting (storyboarding) method and device for audio-visual shooting, belonging to the field of film and television production. The mirror-splitting method comprises the following steps: setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the parameters of the virtual camera being consistent with those of the actual camera used for real shooting, and the virtual shooting parameters of the virtual camera being set according to a key frame; and determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view and the actual camera position map has the same proportions as the real shoot's camera position map. The virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information. The mirror-splitting method reduces resource waste during shooting, lowers shooting cost, and improves the crew's working efficiency.

Description

Method and device for splitting mirror
Technical Field
The invention relates to the field of film and television shooting, and in particular to a method and a device for mirror splitting (shot division, i.e., storyboarding).
Background
Shot division (storyboarding) is an essential preparatory step in film creation and serves as the basis for coordinating the division of labor; the quality of the storyboard script bears heavily on the quality of the finished film. In existing film production, however, the storyboard preview and the real-time camera position map are difficult to synchronize, and crew members interpret the storyboard script differently, so the director's shooting intent is hard to present synchronously and intuitively on set, which greatly affects shooting efficiency and quality.
Disclosure of Invention
An object of embodiments of the invention is to provide a mirror-splitting method and a mirror-splitting apparatus.
To achieve the above object, an embodiment of the invention provides a mirror-splitting method for audio-visual shooting, comprising: setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having the same parameters as the actual camera used for real shooting; setting virtual shooting parameters of the virtual camera according to a key frame; and determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene. The virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information.
Optionally, determining the actual camera position map from the virtual shooting parameters includes: the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map; and when the virtual shooting parameters are adjusted, the actual camera position map changes with them in equal proportion.
Optionally, when the virtual shooting parameters of the virtual camera are set in real time, the actual camera position map is continuous; when the virtual shooting parameters are set discontinuously, the parameters for the gaps are supplemented to obtain a continuous camera movement line.
Optionally, the mirror-splitting method further includes: obtaining a section view of the actual camera position map and dividing it into nine equal parts, forming four intersection points; the rectangular region bounded by the four intersection points is the displacement range of the actual camera, used to ensure that the camera position remains within the visible range in real time. Obtaining the section view of the actual camera position map includes: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
Optionally, the mirror-splitting method further includes: rendering the virtual scene, and setting the direction and intensity of the light sources in the virtual scene.
Optionally, the camera position information includes at least one of camera position coordinates, camera movement track, camera movement speed, and camera movement mode; the camera movement mode includes at least one of push-in, pull-out, pan, tracking, and follow shots; the shot scale includes at least one of full shot (panorama), close shot, long shot, medium shot, and close-up; the shooting mode includes at least one of a virtual mode, a roaming mode, a real-time mode, a sensing mode, and an LED mode; the scheduling information includes at least one of character scheduling information and script scheduling information; the shooting range of the virtual camera is 360 degrees; and the virtual scene is further provided with at least one of a light controller, a sound-effect controller, and a digital character.
In another aspect, the invention provides a mirror-splitting apparatus for audio-visual shooting, comprising: a setting module for setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having the same parameters as the actual camera used for real shooting; an acquisition module for setting virtual shooting parameters of the virtual camera according to a key frame; and a first processing module for determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene; the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information.
Optionally, determining the actual camera position map from the virtual shooting parameters includes: the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map; and when the virtual shooting parameters are adjusted, the actual camera position map changes in equal proportion.
Optionally, the first processing module is further configured to: keep the actual camera position map continuous when the virtual shooting parameters of the virtual camera are set in real time; and, when the virtual shooting parameters are set discontinuously, supplement the parameters for the gaps to obtain a continuous camera movement line.
Optionally, the mirror-splitting apparatus further includes a second processing module for dividing the actual camera position map into nine equal parts, forming four intersection points, the camera position being arranged on the intersection points to ensure that it remains within the visible range in real time. Obtaining the section view of the actual camera position map includes: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
The invention provides a mirror-splitting method for audio-visual shooting, comprising: setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the parameters of the virtual camera being consistent with those of the actual camera used for real shooting; setting virtual shooting parameters of the virtual camera according to a key frame; and determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene; the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information. The method determines the actual camera position map from the captured figures and scene changes, and uses it to assign the corresponding virtual camera and real camera for tracking, which avoids resource waste caused by information errors, improves the crew's working efficiency, reduces shooting cost, and helps the director and actors better adapt to and complete their work.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention and not to limit the embodiments of the invention. In the drawings:
FIGS. 1-2 are schematic flow diagrams of the mirror-splitting method according to the present invention;
FIG. 3 is a flow chart of obtaining a plan view according to the present invention;
FIG. 4 is a schematic view of the nine-grid plane composition according to the present invention;
FIG. 5 is a schematic diagram of an embodiment of the mirror-splitting system of the present invention;
FIG. 6 is a flow chart of generating the storyboard table in the mirror-splitting method of the present invention;
FIG. 7 is a schematic representation of the electronic storyboard table of the present invention;
FIG. 8 is a schematic diagram of the file-package processing of the present invention;
FIG. 9 is a schematic diagram of alternate frame dropping in the data packets of the present invention.
Description of the reference numerals
1 - video animation window;
2 - map preview window;
3 - camera setting window;
4 - parameter adjustment window.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flow diagram of the mirror-splitting method according to the present invention. As shown in fig. 1, step S101 sets up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having parameters consistent with the actual camera used for real shooting. Preferably, the shooting range of the virtual camera is 360 degrees. The virtual scene is further provided with at least one of a light controller, a sound-effect controller, and a digital character. The mirror-splitting method of the invention allows all creators (such as lighting and sound-effect staff) to enter the scene on site and perform a digital survey with 360-degree coverage and no blind spots.
Step S102 sets the virtual shooting parameters of the virtual camera according to the key frame. The key frame is the image frame corresponding to the finally desired shooting effect. The virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information, and are set so as to obtain the expected key frame.
Step S103 determines the actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene; the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information.
The camera position information includes at least one of camera position coordinates, camera movement track, camera movement speed, and camera movement mode; the camera movement mode includes at least one of push-in, pull-out, pan, tracking, and follow shots; the shot scale includes at least one of full shot (panorama), close shot, long shot, medium shot, and close-up; the shooting mode includes at least one of a virtual mode, a roaming mode, a real-time mode, a sensing mode, and an LED mode; the scheduling information includes at least one of character scheduling information and script scheduling information.
According to a preferred embodiment, determining the actual camera position map from the virtual shooting parameters includes: the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map; when the virtual shooting parameters are adjusted, the actual camera position map changes proportionally, and the virtual camera and the actual camera position move synchronously, preferably in lockstep.
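By way of illustration only, the proportional mapping described above can be sketched as follows; the class and function names, the metre unit, and the fixed map scale are assumptions for this sketch, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    x: float        # scene coordinates, metres (assumed unit)
    y: float
    yaw_deg: float  # shooting direction

def to_position_map(pose: CameraPose, scale: float,
                    origin: tuple[float, float] = (0.0, 0.0)) -> CameraPose:
    """Map a virtual-camera pose onto the top-down camera position map.

    The initial coordinates of the virtual and actual camera coincide at
    `origin`, and any later adjustment is reproduced in equal proportion.
    """
    ox, oy = origin
    return CameraPose(
        x=ox + (pose.x - ox) * scale,
        y=oy + (pose.y - oy) * scale,
        yaw_deg=pose.yaw_deg,  # the viewing angle is preserved 1:1
    )

# Example: the virtual camera dollies 2 m to the right; at an assumed
# map scale of 1:50 the marker on the position map moves 4 cm.
marker = to_position_map(CameraPose(2.0, 0.0, 90.0), scale=1.0 / 50.0)
print(marker)
```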
According to a specific embodiment, as shown in fig. 2, a virtual camera is added to the virtual scene to mark a digital character. While the position of the digital character is viewed from above in the map preview window 2, shot parameters such as the focus frame and aperture can be adjusted. When a shot movement is selected, a dynamic preview of the shot can be checked in real time, and the camera's motion rate can be adjusted in real time to obtain different camera-movement rhythms. The finished shot is annotated with information such as shot scale, shooting method, and shooting mode. An electronic storyboard table is then exported and the produced dynamic video is rendered; the storyboard table displays the scene name, index, storyboard, camera position map, and parameter information.
The invention exports an electronic storyboard table and renders the generated dynamic video information, which includes the text corresponding to each character role. The storyboard table comprises the scene name, index, storyboard, camera position map, parameter information, project information (such as production unit and producer), and so on. The mirror-splitting system comprises a database storing the script decomposition rules, the storyboard-script elements, the rules for setting those elements, and the like. A script is decomposed into at least one storyboard script segment according to the decomposition rules, and the corresponding storyboard script is set according to the setting principles and the storyboard-script elements. The invention can thus generate storyboard scripts effectively, and therefore quickly.
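As a rough sketch of what exporting such a storyboard table could look like in practice (the field names and the CSV format are assumptions for illustration; the patent does not prescribe a file format):

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class Shot:
    scene_name: str
    index: int
    storyboard: str    # path to the storyboard frame image
    position_map: str  # path to the camera position map image
    shot_scale: str    # e.g. "close-up", "medium shot"
    camera_move: str   # e.g. "push-in", "pan", "follow"
    notes: str = ""

def export_storyboard_table(shots: list[Shot], path: str) -> None:
    """Write the electronic storyboard table as CSV, one row per shot."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(shots[0]).keys()))
        writer.writeheader()
        for shot in shots:
            writer.writerow(asdict(shot))

export_storyboard_table(
    [Shot("S01 rooftop", 1, "sb/001.png", "map/001.png", "long shot", "follow")],
    "storyboard_table.csv",
)
```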
Fig. 3 is a flow chart of obtaining a plan view according to the present invention. As shown in fig. 3, the plan view is obtained by first moving the marked object in the dynamic video window, then performing the nine-grid composition and the AI-assisted plane calculation, and then creating an index from each object's size and row-column spacing to implement the nine-grid algorithm; finally, the camera position map and a thumbnail of the dynamic video window are obtained.
Specifically, when the virtual shooting parameters of the virtual camera are set in real time, the actual camera position map is continuous; when they are set discontinuously, the parameters for the gaps are supplemented to obtain a continuous camera movement line.
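One simple way to supplement the missing parameters between discontinuously set keyframes is linear interpolation of the camera position. This is a minimal sketch under that assumption; the patent itself delegates the supplementation to an AI module (described next), so treat this as an illustrative baseline:

```python
import numpy as np

def fill_movement_line(keyframes: dict[int, tuple[float, float]]) -> np.ndarray:
    """Interpolate (x, y) camera positions for every frame between sparse
    keyframes, yielding a continuous camera movement line.

    `keyframes` maps frame index -> (x, y) position on the position map.
    """
    frames = sorted(keyframes)
    t = np.arange(frames[0], frames[-1] + 1)
    xs = np.interp(t, frames, [keyframes[f][0] for f in frames])
    ys = np.interp(t, frames, [keyframes[f][1] for f in frames])
    return np.stack([xs, ys], axis=1)  # shape: (n_frames, 2)

# Keyframes set only at frames 0, 12 and 48; every frame in between is filled.
line = fill_movement_line({0: (0.0, 0.0), 12: (1.0, 0.5), 48: (4.0, 0.5)})
print(line.shape)  # (49, 2)
```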
The virtual shooting parameters for the gaps are supplemented by an AI module, which fills in the lost frames. According to a preferred embodiment, the marked objects in the dynamic video correspond to the moving parts of the camera position map and are transferred with AI assistance. A machine-learning algorithm learns a correlation model between the camera's motion features and the output variables. Specifically, all movement information is stored in the database; when a marked object moves, the AI module generates a list of all features of that movement information, together with the route of the movement (the output variables).
The AI module in the invention is an intelligent module based on multi-data perception, involving computer vision, machine learning, and other AI technologies. It automatically and intelligently processes a file package, where the file package is an image data packet. It identifies the sequence number, composition, and characters of each captured frame of the digital actor and gives recommendations, including guidance for various aspects of the scene and the optimal camera positions. The camera position map is generated in one pass rather than drawn after storyboarding. Specifically, the path of the camera motion can be predicted with the previously trained computational model from all the motion-information features in the database. The AI module is trained on the samples in the database.
the neuron model is used for performing accumulation integration on all input signals to obtain the membrane potential in the biological neuron, wherein the value of the membrane potential is
Figure 35218DEST_PATH_IMAGE001
If the threshold is considered as an input to a neuron
Figure 316344DEST_PATH_IMAGE004
Weight of (2)
Figure 26811DEST_PATH_IMAGE005
The above formula can be simplified to
Figure 646142DEST_PATH_IMAGE006
Wherein if the input sum exceeds a threshold, the neuron is activated and a second firing pulse is performed; if the sum of the inputs does not exceed the threshold, the output signal of the neuron is complemented back. The specific function is as follows:
Figure 758455DEST_PATH_IMAGE007
wherein,
Figure 687096DEST_PATH_IMAGE008
the number of the neuron is the number of the neuron,
Figure 884859DEST_PATH_IMAGE009
in order to input the signal, the signal is,
Figure 806417DEST_PATH_IMAGE010
as a weight value, the weight value,
Figure 179760DEST_PATH_IMAGE011
in order to be output, the output is,
Figure 420249DEST_PATH_IMAGE012
is the sum of the total number of the first time,
Figure 964363DEST_PATH_IMAGE013
is the potential of the membrane, and is,
Figure 315710DEST_PATH_IMAGE014
is a threshold value, and is,
Figure 510936DEST_PATH_IMAGE016
is an activation function. The training set is as follows:
Figure 922326DEST_PATH_IMAGE017
wherein, in the process,
Figure 156998DEST_PATH_IMAGE018
for training tuples, N is the unit and N is the number of neural network layers.
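For illustration, a threshold neuron of the kind just described can be written in a few lines; this is a sketch under the stated equations, not the patent's code:

```python
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, theta: float) -> int:
    """Threshold neuron: fire (1) iff the membrane potential exceeds theta."""
    u = np.dot(w, x)  # membrane potential u = sum_i w_i * x_i
    return 1 if u - theta > 0 else 0

# Folding the threshold in as weight w0 of a constant input x0 = -1
# gives the simplified form used above.
x = np.array([-1.0, 0.8, 0.3])  # x0 = -1, followed by the real inputs
w = np.array([0.5, 1.0, 2.0])   # w0 = theta = 0.5
assert neuron(x[1:], w[1:], theta=0.5) == (1 if np.dot(w, x) > 0 else 0)
```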
Unary linear regression model: $x$ and $y$ are two variables, and the dependent variable $y$ is influenced by the variable $x$. The regression model is expressed as

$$y = \beta_0 + \beta_1 x + \varepsilon$$

In a simple regression model, the regression function is a linear function of the explanatory variable, where $\beta_1$ is the regression coefficient, $\beta_0$ is the constant term, and $\varepsilon$ is a random error. The sample regression model is written

$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, 2, 3, \ldots, n$$

where regression models fitted to different samples from the same population have different error terms $\varepsilon_i$.
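A minimal sketch of fitting such a unary regression to stored movement features and using it to predict the next points of a camera route; the sample values are hypothetical and the patent does not specify the fitting procedure:

```python
import numpy as np

# Stored movement features (x) and observed camera displacement (y);
# hypothetical sample values for illustration only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

# Ordinary least squares for y = b0 + b1*x + error.
b1, b0 = np.polyfit(x, y, deg=1)          # returns highest degree first
predicted = b0 + b1 * np.array([5.0, 6.0])  # predict the next route points
print(f"b0={b0:.2f}, b1={b1:.2f}, predicted={predicted}")
```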
The original data are preprocessed, their features extracted, and the transformation of those features predicted and identified to obtain the course of change and the result of the original data. The AI corresponding to the dynamic-video scene data assists in performing a variety of mathematical calculations.
The invention belongs to the field of computer vision and adds perception-based virtual production to the mirror-splitting system. The production method is improved from the ground up: based on an artificial-intelligence learning algorithm combined with a spatial-geometry motion algorithm, a controller algorithm, an improved communication algorithm, an ad hoc network distribution algorithm, and real-time feedback, it provides three-dimensional shooting of the virtual scene, interactive control, and tracking control. The method can be adjusted freely according to user needs, and offers strong real-time performance, lifelike results, and other advantages.
Fig. 8 is a schematic diagram of the file-package processing of the present invention. As shown in fig. 8, the first module contains file A, the second module file B, and the third module file C. Images of different frames in the three modules' files (the first, second, and third marks) are selected and jointly rendered through asynchronous loading (the fourth mark) to obtain one image at a time.
Transmitting a large volume of data in a short time and storing all of it wastes memory, so alternate frame dropping is applied when encapsulating the data in the file package. As shown in fig. 9, when the volume of data transmitted per second is large, the system performs lossless dynamic compression and lossless dynamic decompression through the intelligent AI module. A is a data packet that always exists locally and is called from local storage; A' is the compressed data packet and A'' the finally compressed data packet; A' and A'' are each a single packet of bytes, so several packets of bytes are merged into one. As shown, the first through fifth packets do not change every frame, so only one command is sent for them. When the data in a packet changes, the changed part is processed by frame extraction and frame dropping based on the multi-data-perception intelligent module, merging two frames into one. The second data packet in A becomes the third data packet in A' by identifying the changed second packet and applying alternate frame dropping; in the same way the third data packet in A' becomes the fourth data packet in A''. (Note: during rendering, the multi-data-perception intelligent module automatically supplements the dropped frames, inserting frames at the same position of the third data packet in A'. The module stores and records the positions of the dropped frames during frame extraction and frame dropping, and identifies those positions during frame insertion so that insertion is performed quickly, the lost frames are restored automatically, and the complete data packet A is recovered.) This process is called lossless dynamic compression.
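The scheme amounts to keeping only changed frames plus a log of where frames were dropped, so the original sequence can be rebuilt exactly. A toy sketch of that idea (the function names and the dict-based log are assumptions; in the patent the AI module decides what to drop):

```python
def compress(frames: list[bytes]) -> tuple[list[bytes], dict[int, int]]:
    """Drop frames identical to their predecessor; log position -> drop count."""
    kept: list[bytes] = []
    drop_log: dict[int, int] = {}
    for frame in frames:
        if kept and frame == kept[-1]:
            idx = len(kept) - 1
            drop_log[idx] = drop_log.get(idx, 0) + 1  # record dropped position
        else:
            kept.append(frame)
    return kept, drop_log

def decompress(kept: list[bytes], drop_log: dict[int, int]) -> list[bytes]:
    """Reinsert the dropped frames at their recorded positions (lossless)."""
    out: list[bytes] = []
    for i, frame in enumerate(kept):
        out.extend([frame] * (1 + drop_log.get(i, 0)))
    return out

frames = [b"f0", b"f0", b"f0", b"f1", b"f1", b"f2"]
kept, log = compress(frames)
assert decompress(kept, log) == frames  # the round trip is exact
```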
In the prior art, the storyboard and the camera position map cannot be unified. The file package in the invention can obtain one picture at a time through asynchronous loading and joint rendering. The file package consists of two parts: the first part comprises the plane picture; the second part is the three-dimensional model package. Specifically, the file package is rendered in a fixed order: the composition of the plan view and the calculation of the plan view come first. Based on artificial intelligence, with learning algorithms and mathematical algorithms as the means, images are transmitted through the file package for processing. For example, the file package may contain document types 1 through n, each of which can be executed simultaneously. Three images of different frames can be displayed simultaneously in the played image; each section is a segment of data, and compressing the data into one file-package segment reduces its volume; the displayed file packages are then rendered together through asynchronous loading to obtain one image.
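A sketch of asynchronous loading with joint rendering; the loader and compositor here are illustrative stand-ins, assuming the three files correspond to the modules of fig. 8:

```python
import asyncio

async def load_frame(name: str) -> bytes:
    """Stand-in for asynchronously loading one frame from a file package."""
    await asyncio.sleep(0)  # real code would perform async I/O here
    return f"<frame from {name}>".encode()

def joint_render(frames: list[bytes]) -> bytes:
    """Stand-in compositor: combine the loaded frames into one image."""
    return b"|".join(frames)

async def main() -> bytes:
    # Files A, B and C are loaded concurrently, then rendered together.
    frames = await asyncio.gather(*(load_frame(n) for n in ("A", "B", "C")))
    return joint_render(list(frames))

print(asyncio.run(main()))
```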
The nine-grid composition comprises: obtaining a section view of the actual camera position map and dividing it into nine equal parts, forming four intersection points; the rectangular region bounded by the four intersection points is the movement range of the actual camera, used to ensure that the camera position remains within the visible range in real time. Obtaining the section view of the actual camera position map includes: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
The mirror splitting method further comprises the following steps: and rendering the virtual scene, and setting the trend and the strength of the light source in the virtual scene.
In conventional shooting, compositions come in many kinds, such as the nine-grid (rule-of-thirds) composition. The traditional nine-grid composition is two-dimensional, whereas the invention cuts a plane from the three-dimensional picture to determine, at the top-down viewing angle, the scale display of the camera's optimal position area. As shown in fig. 4, in the nine-grid plane composition of the invention, the relationship between the cameras, the scheduled objects, and the markers forms the minimum boundary of the nine-grid; that is, the content contained in the nine-grid must include several marked objects. By this method, the scheduling scale relationship of the plan view is determined from the relationships among the cameras, the motion, and the objects and groups in the two-dimensional view. The nine-grid picture follows the golden-section proportion and contains four focal points (an object picture occupies only one or two of them).
The invention enlarges the image through multi-data perception, intelligent automation, mathematical calculation, and intelligent scale adjustment. Specifically, a camera is first added to mark the digital actor, and the actor's position below is viewed from the map preview window 2. Positioning the digital actor requires a plan view, which is calculated using the nine-grid. The golden section of the nine-grid pattern has four focal points, and once an object's position is determined it can be computed as occupying one or two of them. The nine-grid algorithm of the invention determines the optimal position area of the actual camera for the three-dimensional viewing angle.
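Concretely, dividing the section view into thirds along each axis yields the four intersection points and the central rectangle that bounds the camera's movement range; a small sketch, with the coordinate conventions assumed:

```python
def nine_grid(width: float, height: float):
    """Return the four rule-of-thirds intersection points and the central
    rectangle (the actual camera's allowed displacement range)."""
    xs = (width / 3.0, 2.0 * width / 3.0)
    ys = (height / 3.0, 2.0 * height / 3.0)
    points = [(x, y) for x in xs for y in ys]    # four intersection points
    central_rect = (xs[0], ys[0], xs[1], ys[1])  # (x0, y0, x1, y1)
    return points, central_rect

def in_visible_range(cam_xy: tuple[float, float], rect) -> bool:
    """True if the camera position stays inside the central rectangle."""
    x, y = cam_xy
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

points, rect = nine_grid(1920, 1080)
print(points, in_visible_range((960, 540), rect))  # the centre is inside
```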
The nine-grid (magic-square) calculation formula is: S = n(n^2 + 1)/2, where S is the sum of the elements in each row and n is the order, i.e., the number of rows. The nine-grid picture of the invention arranges consecutive natural numbers into a square of n rows and n columns so that the numbers in every row, every column, and both diagonals have equal sums. In a magic square, the sums along each row, each column, the main diagonal, and the broken diagonals are all equal. An n-order magic square is an n x n matrix of the first n^2 natural numbers in which every row, every column, and both diagonals contain n numbers with equal sums. The nine-grid calculation formula is mainly used to create controls and an index according to each object's size, row pitch, and column spacing.
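For example, for the 3 x 3 case (the classical Lo Shu square) the formula gives S = 3(9 + 1)/2 = 15, which a few lines verify:

```python
import numpy as np

def magic_constant(n: int) -> int:
    """Sum of each row/column/diagonal of an n-order magic square."""
    return n * (n * n + 1) // 2

lo_shu = np.array([[4, 9, 2],
                   [3, 5, 7],
                   [8, 1, 6]])
assert magic_constant(3) == 15
assert all(s == 15 for s in lo_shu.sum(axis=0))            # columns
assert all(s == 15 for s in lo_shu.sum(axis=1))            # rows
assert lo_shu.trace() == np.fliplr(lo_shu).trace() == 15   # both diagonals
```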
When a shot movement is selected, the key frame can be marked; the shot is then moved to its landing position and marked again. The camera's moving position and a thumbnail can be seen in the map preview window 2. This method can be used to watch a dynamic preview of the shot in real time, and the camera's motion rate can be adjusted in real time to obtain different camera-movement rhythms.
Fig. 5 is a schematic diagram of an embodiment of the mirror-splitting system according to the present invention. As shown in fig. 5, the mirror-splitting system comprises at least a video animation window 1, a map preview window 2, a camera setting window 3, and a parameter adjustment window 4.
The video animation window 1 shows the virtual scene, which is a 3D scene and may contain a virtual camera, a light controller, a sound-effect controller, a digital figure, and the like. The map preview window 2 generates the actual camera position map corresponding to the virtual scene; its preferred viewing angle is the top-down view. The video animation window 1 and the map preview window 2 are adjusted through the camera setting window 3 and the parameter adjustment window 4. Focus, focal length, picture width, shot scale, shooting speed, camera position, props, figures, and so on can all be adjusted. The system can also output storyboard tables in different formats and import them into other engines for subsequent shooting.
The mirror-splitting method further comprises: rendering the virtual scene, and setting the direction and intensity of the light sources in the virtual scene. Specifically, the rendered dynamic video can guide the lighting work of the real shoot in real time according to the direction and intensity of the light sources in the whole scene, and the course of the dynamic video scene's changes is compared in real time with the digital actor's movement track during real shooting.
When the system is used outdoors, a camera is added to mark the digital characters; the map preview window 2 shows the plan view and the positions of the digital actors in the scene, and allows shot parameters such as focus, focal length, frame, and aperture to be adjusted. When a single shot moves, a key frame is marked, the shot is moved to its framing position, and the key frame is marked again. The camera position and a thumbnail can be checked in the map preview window 2, the dynamic preview of the shot can be watched in real time, and the camera movement can be adjusted to obtain different camera-movement rhythms. After the shot is finished, the adjusted shot information is annotated, such as the shooting type, shooting method, and shooting mode (for example, the selected type, mobile shooting mode, or real-time shooting mode); the whole operation needs only one shot. Fig. 6 is a flow chart of generating the storyboard table in the mirror-splitting method of the invention. As shown in fig. 6, the specific storyboarding steps include: adjusting the dynamic video window and selecting a movement over a certain distance; directly obtaining the camera position map; recording a video of the movement in the dynamic preview window and watching the shot's dynamic preview in real time; adjusting the camera's motion rate in real time to obtain different camera-movement rhythms; adding marks to the finished shot; and exporting the electronic storyboard table (which includes all the shot information, as shown in fig. 7). The system can export the electronic storyboard image, output path, scene name, shooting mode, method, shot scale, and other information. The mirror-splitting system also renders and outputs the finished shots, and with it all creators can enter the scene on site for a 360-degree digital survey with no blind spots.
The invention renders and outputs the finished shots in real time (computing and outputting the graphics data in real time), which not only lets the virtual camera enter the scene more intuitively but also lets all creators enter the scene on site for a 360-degree digital survey with no blind spots. On top of an open-source rendering engine, algorithmic innovations are made to the graphics API, effect management, spatial partitioning, scene-graph structure, particle system, and so on, realizing a real-time "one continuous shot" rendering function.
Traditional green-screen shooting is time- and labor-consuming when arranging the scene. With the invention, the direction and intensity of the light sources in the whole scene can be obtained from the dynamically rendered video, and interaction is performed according to the projection of the light in the camera (fusion: interacting according to the projection of the light in the fused picture), which improves the efficiency of the lighting work. The system sends the scene, index, storyboards, camera position maps, actor blocking, camera parameters, captions, and other information to each department, enabling distributed and orderly collaboration among all departments. Every department can complete the whole shooting process with only the electronic storyboard in hand, which markedly improves shooting efficiency.
The invention also provides a mirror-splitting apparatus for audio-visual shooting, comprising: a setting module for setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having the same parameters as the actual camera used for real shooting; an acquisition module for setting virtual shooting parameters of the virtual camera according to a key frame; and a first processing module for determining the actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene; the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information. Determining the actual camera position map from the virtual shooting parameters includes: the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map; when the virtual shooting parameters are adjusted, the actual camera position map changes in equal proportion. The first processing module is further configured to keep the actual camera position map continuous when the virtual shooting parameters are set in real time, and, when they are set discontinuously, to supplement the parameters for the gaps to obtain a continuous camera movement line. The apparatus further comprises a second processing module for dividing the actual camera position map into nine equal parts, forming four intersection points, the camera position being arranged on the intersection points to ensure that it remains within the visible range in real time. Obtaining the section view of the actual camera position map includes: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
In the internet era, the traditional film and animation production mode can no longer meet the needs of modern audiences. The invention breaks with traditional film-industry practice, gives artists greater autonomy while guaranteeing visual quality, and shortens the production cycle. The invention is provided with an image acquisition unit, an image processing and analysis unit, and an output control unit. Specifically, a deep-learning algorithm analyzes and processes the video collected by the camera to obtain the final results, which are then fed back to the user.
The invention provides a mirror-splitting method for storyboard scripts of science-fiction films. Based on AI, multi-data perception, intelligent automation, mold design and manufacturing technology, "virtual cinematography" technology, and the multi-data cognitive intelligence of motion capture, it realizes artificial-intelligence cognition of multiple data, supporting intelligent import of 3D film and television assets, convenient creation with a handheld wireless controller, and intelligent generation of storyboards. Functions such as direct export of the electronic storyboard table are realized, with the characteristics of virtual-real combination, real-time rendering, and what-you-see-is-what-you-get.
The invention discloses a mirror-splitting method for audio-visual shooting, comprising: setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, and setting the virtual shooting parameters of the virtual camera according to a key frame; and determining the actual camera position map from the virtual shooting parameters so that the virtual camera and the actual camera have the same shooting angle of view. The actual camera position map comprises the actual shooting scene and the camera movement line, and is used to guide shooting of the actual scene; the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information. The method determines the actual camera position map from the captured figures and scene changes, and uses it to assign the corresponding virtual camera and actual camera for tracking, thereby avoiding resource waste caused by information errors, improving the practitioners' working efficiency, reducing shooting cost, and helping the director and actors better adapt to and complete their work.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and these simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. A mirror-splitting method for audio-visual shooting, comprising:
setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having the same parameters as the actual camera used for real shooting;
setting virtual shooting parameters of the virtual camera according to a key frame; and
determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map;
wherein the actual camera position map comprises the actual shooting scene and the camera movement line and is used to guide shooting of the actual scene; and
the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information.
2. The mirror-splitting method according to claim 1, wherein determining the actual camera position map from the virtual shooting parameters comprises:
the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map;
and when the virtual shooting parameters are adjusted, the actual camera position map changes with them in equal proportion.
3. The mirror-splitting method according to claim 1, wherein,
when the virtual shooting parameters of the virtual camera are set in real time, the actual camera position map is continuous;
and when the virtual shooting parameters of the virtual camera are set discontinuously, the virtual shooting parameters for the gaps are supplemented to obtain a continuous camera movement line.
4. The mirror-splitting method according to claim 1, further comprising: obtaining a section view of the actual camera position map and dividing it into nine equal parts, forming four intersection points, wherein the rectangular region bounded by the four intersection points is the displacement range of the actual camera and is used to ensure that the camera position remains within the visible range in real time;
wherein obtaining the section view of the actual camera position map comprises: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
5. The mirror-splitting method according to claim 1, further comprising:
rendering the virtual scene, and setting the direction and intensity of the light sources in the virtual scene.
6. The mirror-splitting method according to claim 1, wherein
the camera position information includes at least one of camera position coordinates, camera movement track, camera movement speed, and camera movement mode;
the camera movement mode includes at least one of push-in, pull-out, pan, tracking, and follow shots;
the shot scale includes at least one of full shot (panorama), close shot, long shot, medium shot, and close-up;
the shooting mode includes at least one of a virtual mode, a roaming mode, a real-time mode, a sensing mode, and an LED mode;
the scheduling information includes at least one of character scheduling information and script scheduling information;
the shooting range of the virtual camera is 360 degrees; and
the virtual scene is further provided with at least one of a light controller, a sound-effect controller, and a digital character.
7. A mirror-splitting apparatus for audio-visual shooting, comprising:
a setting module for setting up a virtual scene, wherein the virtual scene is a 3D scene in which at least one virtual camera is arranged, the virtual camera having the same parameters as the actual camera used for real shooting;
an acquisition module for setting virtual shooting parameters of the virtual camera according to a key frame; and
a first processing module for determining an actual camera position map from the virtual shooting parameters, so that the virtual camera and the actual camera have the same shooting angle of view, and the actual camera position map has the same proportions as the real shoot's camera position map;
wherein the actual camera position map comprises the actual shooting scene and the camera movement line and is used to guide shooting of the actual scene; and
the virtual shooting parameters include at least one of camera position information, shot scale, shooting mode, and scheduling information.
8. The apparatus according to claim 7, wherein determining the actual camera position map from the virtual shooting parameters comprises:
the initial coordinate information of the virtual camera is consistent with that of the actual camera in the actual camera position map;
and when the virtual shooting parameters are adjusted, the actual camera position map changes with them in equal proportion.
9. The mirror-splitting apparatus according to claim 7, wherein
the first processing module is further configured to:
keep the actual camera position map continuous when the virtual shooting parameters of the virtual camera are set in real time;
and, when the virtual shooting parameters of the virtual camera are set discontinuously, supplement the virtual shooting parameters for the gaps to obtain a continuous camera movement line.
10. The mirror-splitting apparatus according to claim 7, further comprising:
a second processing module for dividing the actual camera position map into nine equal parts, forming four intersection points, the camera position being arranged on the intersection points to ensure that the camera position remains within the visible range in real time;
wherein obtaining the section view of the actual camera position map comprises: computing a three-dimensional picture of the virtual scene with the AI module, extracting the section view from the three-dimensional picture, and determining the optimal camera position area from the section view.
CN202211396457.XA 2022-11-09 2022-11-09 Method and device for splitting mirror Active CN115442542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211396457.XA CN115442542B (en) 2022-11-09 2022-11-09 Method and device for splitting mirror

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211396457.XA CN115442542B (en) 2022-11-09 2022-11-09 Method and device for splitting mirror

Publications (2)

Publication Number Publication Date
CN115442542A true CN115442542A (en) 2022-12-06
CN115442542B CN115442542B (en) 2023-04-07

Family

ID=84252126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211396457.XA Active CN115442542B (en) 2022-11-09 2022-11-09 Method and device for splitting mirror

Country Status (1)

Country Link
CN (1) CN115442542B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226838A (en) * 2013-04-10 2013-07-31 福州林景行信息技术有限公司 Real-time spatial positioning method for mobile monitoring target in geographical scene
US20190102949A1 (en) * 2017-10-03 2019-04-04 Blueprint Reality Inc. Mixed reality cinematography using remote activity stations
CN111476869A (en) * 2019-01-24 2020-07-31 湖南深度体验智能技术有限公司 Virtual camera planning method for computing media
CN111080759A (en) * 2019-12-03 2020-04-28 深圳市商汤科技有限公司 Method and device for realizing split mirror effect and related product
CN111698390A (en) * 2020-06-23 2020-09-22 网易(杭州)网络有限公司 Virtual camera control method and device, and virtual studio implementation method and system
CN113411621A (en) * 2021-05-25 2021-09-17 网易(杭州)网络有限公司 Audio data processing method and device, storage medium and electronic equipment
CN114419212A (en) * 2022-01-19 2022-04-29 浙江博采传媒有限公司 Virtual preview method, device and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115967779A (en) * 2022-12-27 2023-04-14 北京爱奇艺科技有限公司 Method and device for displaying bitmap of virtual camera machine, electronic equipment and medium
CN116017054A (en) * 2023-03-24 2023-04-25 北京天图万境科技有限公司 Method and device for multi-compound interaction processing
CN116017054B (en) * 2023-03-24 2023-06-16 北京天图万境科技有限公司 Method and device for multi-compound interaction processing
CN116320363A (en) * 2023-05-25 2023-06-23 四川中绳矩阵技术发展有限公司 Multi-angle virtual reality shooting method and system
CN118540575A (en) * 2024-07-24 2024-08-23 荣耀终端有限公司 Co-location shooting method, electronic device, storage medium and program product

Also Published As

Publication number Publication date
CN115442542B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN115442542B (en) Method and device for splitting mirror
CN107067429A (en) Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
EP2600316A1 (en) Method, system and software program for shooting and editing a film comprising at least one image of a 3D computer-generated animation
US11425283B1 (en) Blending real and virtual focus in a virtual display environment
CN102741879A (en) Method for generating depth maps from monocular images and systems using the same
US11354774B2 (en) Facial model mapping with a neural network trained on varying levels of detail of facial scans
EP4111677B1 (en) Multi-source image data synchronization
CN108280873A (en) Model space position capture and hot spot automatically generate processing system
WO2023217138A1 (en) Parameter configuration method and apparatus, device, storage medium and product
US20210241486A1 (en) Analyzing screen coverage
CN111223190A (en) Processing method for collecting VR image in real scene
CN112562056A (en) Control method, device, medium and equipment for virtual light in virtual studio
CN212519183U (en) Virtual shooting system for camera robot
CN109863746B (en) Immersive environment system and video projection module for data exploration
CN117527993A (en) Device and method for performing virtual shooting in controllable space
CN110418056A (en) A kind of image processing method, device, storage medium and electronic equipment
Comino Trinidad et al. Easy authoring of image-supported short stories for 3d scanned cultural heritage
CN115497029A (en) Video processing method, device and computer readable storage medium
CN109389538A (en) A kind of Intelligent campus management system based on AR technology
CN114782600A (en) Video specific area rendering system and rendering method based on auxiliary grid
Takacs et al. Hyper 360—towards a unified tool set supporting next generation VR film and TV productions
CN116156250B (en) Video processing method and device
US11200732B1 (en) Efficiently determining an absorption coefficient of a virtual volume in 3D computer graphics
CN116055708B (en) Perception visual interactive spherical screen three-dimensional imaging method and system
Gao et al. Aesthetics Driven Autonomous Time-Lapse Photography Generation by Virtual and Real Robots

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant