CN117425076B - Shooting method and system for virtual camera - Google Patents

Shooting method and system for virtual camera

Info

Publication number
CN117425076B
CN117425076B (application CN202311736198.5A)
Authority
CN
China
Prior art keywords
shooting
virtual camera
picture
angle
determining
Prior art date
Legal status
Active
Application number
CN202311736198.5A
Other languages
Chinese (zh)
Other versions
CN117425076A
Inventor
曾建华
于洪举
Current Assignee
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Original Assignee
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan MgtvCom Interactive Entertainment Media Co Ltd filed Critical Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority to CN202311736198.5A
Publication of CN117425076A
Application granted
Publication of CN117425076B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a shooting method and system for a virtual camera. The method comprises: processing a shooting task with a preset recognition model and outputting a camera movement mode for each virtual camera, so that each virtual camera shoots in its corresponding camera movement mode; for each virtual camera, determining reference lines based on the captured frame size; tracking person marker points in the captured frames, wherein the marker points are determined based on the camera movement mode; and if it is determined, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, controlling the virtual camera to adjust its shooting angle according to the movement of the person marker points. By recognizing the captured frames of a shooting task with a trained preset recognition model and determining the corresponding camera movement modes, the invention distinguishes shot types and controls the proportion of the person subject in the frame, thereby improving the shooting effect.

Description

Shooting method and system for virtual camera
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and a system for capturing images by using a virtual camera.
Background
At present, virtual shooting is typically performed with fixed parameters. Because the scenes being shot differ, shooting in this way often fails to capture highlight shots in time, which degrades the shooting effect.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a shooting method and system for a virtual camera, so as to solve the problem in the prior art that the shooting effect is degraded.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
A first aspect of an embodiment of the present invention provides a virtual camera shooting method, applied to a virtual camera shooting system, comprising:
acquiring a virtual-camera shooting instruction;
controlling a number of virtual cameras corresponding to the camera count to start, based on the virtual-camera shooting instruction;
inputting the shooting task and the camera count carried in the virtual-camera shooting instruction into a preset recognition model, processing the shooting task and the camera count with the preset recognition model, and outputting a camera movement mode for each virtual camera so that each virtual camera shoots in its corresponding camera movement mode, wherein the preset recognition model is constructed from sample data;
for each virtual camera, determining reference lines based on the captured frame size, the captured frame size being determined from the shooting task carried in the virtual-camera shooting instruction;
tracking person marker points in the captured frames, wherein the marker points are determined based on the camera movement mode;
and if it is determined, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, controlling the virtual camera to adjust its shooting angle according to the movement of the person marker points.
Optionally, for each virtual camera, each captured frame shot by the virtual camera is collected;
for each captured frame, multi-dimensional recognition is performed on the frame to obtain frame elements corresponding to each dimension;
it is judged whether an abnormal element exists among the frame elements of each dimension;
and if none exists, the frame is determined to be a valid captured frame and stored.
Optionally, if the dimension is a multi-person subject interaction dimension, a preset number of virtual cameras are triggered to shoot the persons in the frame from preset multiple angles.
Optionally, the process of constructing the preset recognition model from sample data comprises:
acquiring historical shooting tasks composed of the shooting types and shooting durations of different shooting scenes, together with the historical camera movement mode of each captured frame in each shooting task;
and performing deep learning training on the historical shooting tasks, the number of cameras required by the historical shooting tasks, and the historical camera movement modes, to obtain the trained preset recognition model.
Optionally, determining the reference lines based on the captured frame size comprises:
acquiring the corresponding frame boundary lines from the shooting task carried in the virtual-camera shooting instruction;
determining a boundary ratio based on the frame boundary lines;
determining a corresponding reference angle group using the boundary ratio;
and determining the corresponding reference lines based on the reference angle group and the frame boundary lines.
Optionally, determining the corresponding reference angle group using the boundary ratio comprises:
inputting the boundary ratio into a processing model, processing the boundary ratio with the processing model, and outputting a reference angle group, wherein the processing model is constructed in advance;
the process of constructing the processing model in advance comprises:
collecting different boundary ratios, and obtaining the shooting effects within the reference lines determined by the different boundary ratios at different reference angles;
and training an initial model with the different boundary ratios, the different reference angles, and the shooting effects within the reference lines determined by the different boundary ratios at the different reference angles, until the output shooting effect falls within a preset range, and determining the currently trained initial model as the processing model.
Optionally, tracking the person marker points in the captured frames comprises:
setting person marker points according to person reference lines and the camera movement mode;
and tracking the person marker points in each captured frame.
Optionally, determining that the shooting angle needs to be adjusted based on the change of the person marker points relative to the reference lines comprises:
recording the movement range of each person marker point over a preset number of captured frames to obtain a marker movement range;
determining the reference lines within which each person marker point lies based on its marker movement range;
judging whether the reference-line interval of each person marker point is within the corresponding preset reference range;
if so, judging whether the interior angles of the triangle drawn from the person marker points each satisfy the corresponding threshold condition;
and if any condition is not satisfied, determining that the shooting angle needs to be adjusted.
A second aspect of an embodiment of the present invention provides a virtual camera shooting system, the system comprising:
a starting unit, configured to acquire a virtual-camera shooting instruction, and to control a corresponding number of virtual cameras to start based on the virtual-camera shooting instruction;
a preset recognition model, configured to receive the shooting task carried in the virtual-camera shooting instruction, process the shooting task, and output a camera movement mode for each virtual camera so that each virtual camera shoots in its corresponding camera movement mode, wherein the preset recognition model is constructed from sample data;
a processing unit, configured to determine, for each virtual camera, reference lines based on the captured frame size, the captured frame size being determined from the shooting task carried in the virtual-camera shooting instruction, and to track person marker points in the captured frames, wherein the marker points are determined based on the camera movement mode;
and an adjusting unit, configured to determine, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, and to control the virtual camera to adjust its shooting angle according to the movement of the person marker points.
A third aspect of an embodiment of the present invention provides a storage medium, the storage medium comprising a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the virtual camera shooting method according to the first aspect of the embodiment of the present invention.
According to the shooting method and system for a virtual camera provided by the embodiments of the present invention, the method comprises: when a virtual-camera shooting instruction is received, controlling a corresponding number of virtual cameras to start based on the instruction; inputting the shooting task carried in the instruction into a preset recognition model, processing the shooting task with the preset recognition model, and outputting a camera movement mode for each virtual camera so that each virtual camera shoots in its corresponding camera movement mode, wherein the preset recognition model is constructed from sample data; for each virtual camera, determining reference lines based on the captured frame size determined from the shooting task carried in the instruction; tracking person marker points in the captured frames, wherein the marker points are determined based on the camera movement mode; and if it is determined, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, controlling the virtual camera to adjust its shooting angle according to the movement of the person marker points. In the embodiments of the present invention, the captured frames of a shooting task are recognized and the corresponding camera movement modes are determined by a trained preset recognition model; shot types such as close-up, medium shot, long shot and empty shot are distinguished within the camera movement modes to control the proportion of the person subject in the frame, thereby improving the shooting effect.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a connection architecture between a virtual camera shooting system and a virtual camera according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a virtual camera shooting system according to an embodiment of the present invention;
fig. 3 is a flowchart of a virtual camera shooting method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of reference lines shown in an example of an embodiment of the present invention;
FIG. 5 is a schematic diagram of person marker point labeling according to an embodiment of the present invention;
fig. 6 is a flowchart of another virtual camera shooting method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of shot detection according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the descriptions of "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that the combination can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered not to exist and not to fall within the scope of protection claimed in the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Referring to fig. 1, a schematic diagram of the connection architecture between a virtual camera shooting system and virtual cameras according to an embodiment of the present invention is shown.
The virtual camera shooting system 20 is connected to the virtual cameras 10 through a wireless communication device.
The virtual camera shooting system 20 may be arranged in an upper computer; the virtual camera 10 may be an AI virtual camera or another type of virtual camera.
Based on the architecture shown in fig. 1, an embodiment of the present invention further shows a specific architecture of the virtual camera shooting system 20. As shown in fig. 2, the virtual camera shooting system 20 includes:
a starting unit 201, configured to acquire a virtual-camera shooting instruction, and to control a corresponding number of virtual cameras to start based on the instruction;
a preset recognition model 202, configured to receive the shooting task carried in the virtual-camera shooting instruction, process the shooting task, and output a camera movement mode for each virtual camera so that each virtual camera shoots in its corresponding camera movement mode, wherein the preset recognition model is constructed from sample data;
a processing unit 203, configured to determine, for each virtual camera, reference lines based on the captured frame size, the captured frame size being determined from the shooting task carried in the virtual-camera shooting instruction, and to track person marker points in the captured frames, wherein the person marker points are determined based on the camera movement mode;
and an adjusting unit 204, configured to determine, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, and to control the virtual camera to adjust its shooting angle according to the movement of the person marker points.
Optionally, the process of constructing the preset recognition model 202 from sample data includes:
acquiring historical shooting tasks composed of the shooting types and shooting durations of different shooting scenes, together with the historical camera movement mode of each captured frame in each shooting task; and performing deep learning training based on the historical shooting tasks and the historical camera movement modes to obtain the trained preset recognition model.
It should be noted that the specific implementation of each unit shown above is the same as the specific implementation of the virtual camera shooting method, and the two may be referred to mutually.
In the embodiment of the present invention, a trained preset recognition model recognizes the captured frames of a shooting task and determines the corresponding camera movement modes, distinguishing shot types such as close-up, medium shot, long shot and empty shot within those modes; for each virtual camera, reference lines are determined based on the captured frame size, the captured frame size being determined from the shooting task carried in the virtual-camera shooting instruction; person marker points in the captured frames are tracked, the marker points being determined based on the camera movement mode; and if it is determined, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, the virtual camera is controlled to adjust its shooting angle according to the movement of the person marker points. In this way, the proportion of the person subject in the frame is controlled, thereby improving the shooting effect.
Optionally, based on the virtual camera shooting system shown in the above embodiment of the present invention, the processing unit 203 is further configured to:
for each virtual camera, collect each captured frame shot by the virtual camera; for each captured frame, perform multi-dimensional recognition on the frame to obtain frame elements corresponding to each dimension; judge whether an abnormal element exists among the frame elements of each dimension; and if none exists, determine the frame to be a valid captured frame and store it.
If the dimension is a multi-person subject interaction dimension, a preset number of virtual cameras are triggered to shoot the persons in the frame from preset multiple angles.
Optionally, based on the virtual camera shooting system shown in the above embodiment of the present invention, the processing unit 203, when determining the reference lines based on the captured frame size, is further configured to:
acquire the corresponding frame boundary lines from the shooting task carried in the virtual-camera shooting instruction; determine a boundary ratio based on the frame boundary lines; determine a corresponding reference angle group using the boundary ratio; and determine the corresponding reference lines based on the reference angle group and the frame boundary lines.
Optionally, determining the corresponding reference angle group using the boundary ratio includes:
inputting the boundary ratio into a processing model, processing the boundary ratio with the processing model, and outputting a reference angle group, wherein the processing model is constructed in advance.
The process of constructing the processing model in advance includes:
collecting different boundary ratios, and obtaining the shooting effects within the reference lines determined by the different boundary ratios at different reference angles; training an initial model with the different boundary ratios, the different reference angles, and the shooting effects within the reference lines determined by the different boundary ratios at the different reference angles, until the output shooting effect falls within a preset range, and determining the currently trained initial model as the processing model.
Optionally, based on the virtual camera shooting system shown in the above embodiment of the present invention, the processing unit 203, when tracking the person marker points in the captured frames, is further configured to:
set person marker points according to person reference lines and the camera movement mode, and track the person marker points in each captured frame.
Optionally, based on the virtual camera shooting system shown in the above embodiment of the present invention, the processing unit 203, when determining that the shooting angle needs to be adjusted based on the change of the person marker points relative to the reference lines, is further configured to:
record the movement range of each person marker point over a preset number of captured frames to obtain a marker movement range; determine the reference lines within which each person marker point lies based on its marker movement range; judge whether the reference-line interval of each person marker point is within the corresponding preset reference range; if so, judge whether the interior angles of the triangle drawn from the person marker points each satisfy the corresponding threshold condition; and if any condition is not satisfied, determine that the shooting angle needs to be adjusted.
Referring to fig. 3, a flowchart of a virtual camera shooting method according to an embodiment of the present invention is shown. The method includes:
Step S301: acquiring a virtual-camera shooting instruction.
Optionally, based on the front-end interface of the virtual camera shooting system, the user inputs information such as the parameters of the scene to be shot, the virtual space-time, and the device parameters of the virtual cameras to be controlled, or inputs information such as the shooting task and the number of virtual cameras.
The shooting task includes the shooting scene, the shooting duration, the captured frame size and the frame boundary lines.
In the specific implementation of step S301, the virtual camera shooting system generates the virtual-camera shooting instruction based on information such as the parameters of the scene to be shot, the virtual space-time, and the device parameters of the virtual cameras to be controlled.
Step S302: controlling a number of virtual cameras corresponding to the camera count to start, based on the virtual-camera shooting instruction.
In the specific implementation of step S302, the corresponding number of virtual cameras are controlled to start according to the number of virtual cameras in the virtual-camera shooting instruction.
Step S303: and inputting a preset recognition model according to shooting tasks and the number of cameras carried in the shooting instructions of the virtual cameras, processing the shooting tasks and the number of cameras based on the preset recognition model, and outputting a lens-transporting mode of each virtual camera so that the virtual cameras shoot according to the corresponding lens-transporting mode.
In step S303, the preset recognition model is constructed based on sample data;
it should be noted that, the process of constructing the preset recognition model based on the sample data includes:
step S11: and acquiring historical shooting tasks consisting of shooting types and shooting time lengths of different shooting scenes, the number of cameras required by the historical shooting tasks and a historical mirror mode of each frame of shooting picture in each shooting task.
Step S12: and based on the historical shooting task, carrying out deep learning training on the number of cameras required by the historical shooting task and a historical lens-carrying mode to obtain a preset recognition model after training is completed.
In the specific implementation step S12, training the number of cameras required by the historical shooting task and the historical lens-operating mode by using the initial model constructed by the deep learning algorithm until the lens-operating mode of the number of virtual cameras output by the initial model is the same as the corresponding historical lens-operating mode, and determining that the current initial model is a preset identification model.
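The patent does not disclose the network architecture, feature encoding, or training hyperparameters of the preset recognition model. The following is a minimal illustrative sketch in Python, assuming the historical shooting task is encoded as a small numeric feature vector (shooting type, duration, camera count) and the historical camera movement mode is a single class label per sample; all names and values are invented for illustration.

```python
# Minimal illustrative sketch only -- the patent does not disclose the model
# architecture. Feature encoding, class labels and hyperparameters below are
# assumptions made for illustration.
import torch
import torch.nn as nn

# Hypothetical encoding: each historical sample is
#   features = [shooting_type_id, shooting_duration_s, camera_count]
#   label    = camera movement mode index (0=close-up, 1=medium, 2=long, 3=empty shot)
features = torch.tensor([[0, 120.0, 2], [1, 300.0, 3], [2, 60.0, 1]], dtype=torch.float32)
labels = torch.tensor([0, 2, 1])

model = nn.Sequential(
    nn.Linear(3, 32),
    nn.ReLU(),
    nn.Linear(32, 4),   # 4 candidate camera movement modes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Inference: predict a camera movement mode for a new shooting task.
with torch.no_grad():
    mode = model(torch.tensor([[1, 180.0, 2]], dtype=torch.float32)).argmax(dim=1)
```

In practice the model described in steps S11 to S12 outputs a camera movement mode for each of the started virtual cameras rather than a single label; the sketch only shows the general supervised-training loop.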
In the specific implementation of step S303, the shooting task and the camera count carried in the virtual-camera shooting instruction are processed by the preset recognition model, and the camera movement modes of that number of virtual cameras are output, so that each virtual camera shoots in its corresponding camera movement mode.
The camera movement modes include close-up, medium shot, long shot and the like.
Step S304: for each virtual camera, a reference line is determined based on the photographed screen size.
In step S304, the size of the shooting picture is determined by a shooting task carried in the shooting instruction of the virtual camera.
It should be noted that, the process of implementing step S304 specifically includes the following steps:
Step S21: and acquiring a corresponding shooting boundary line from a shooting task carried in the shooting instruction of the virtual camera.
The shooting boundary line includes a long boundary line and a wide boundary line of the camera.
Wherein the long border line includes an upper border line and a lower border line, and the wide border line includes a left border line and a right border line.
Step S22: and determining a shooting boundary ratio based on the shooting boundary line.
In the specific implementation of step S22, the ratio of the long border line to the wide border line is calculated by photographing the border line, and the photographing border ratio is obtained.
Step S23: and determining a corresponding reference angle group by using the shooting boundary ratio.
It should be noted that the corresponding reference angle group may be determined by a preset construction process model, or a correspondence between a pre-stored shooting boundary ratio and the reference angle group.
In a first embodiment, the process of presetting the process model includes:
step S31: and acquiring different shooting boundary ratios, and determining shooting effects in the datum lines under different datum angles by the different shooting boundary ratios.
Step S32: training the initial model by using the different shooting boundary ratios, the different reference angles and the shooting effects in the reference lines determined by the different shooting boundary ratios under the different reference angles until the output shooting effects are in a preset range, and determining the initial model obtained by current training as a processing model.
In the specific implementation step S32, taking the shooting effects in the datum lines determined by the different shooting boundary ratios, the different reference angles and the different shooting boundary ratios under the different reference angles as the sample data, and dividing the sample data into a training set and a test set; training the initial model based on the training set to obtain a trained initial model; testing the initial model by using a test set to obtain a predicted reference angle group; judging whether each prediction reference angle group in the prediction voltage groups is in a preset range, and if so, determining an initial model obtained by current training as a prediction model.
It should be noted that, the preset range includes reference angles in which all shooting effects in the test set are prioritized.
In the specific implementation process of step S23, the shooting boundary ratio is input into a processing model, the shooting boundary ratio is processed based on the processing model, and a reference angle group is output, that is, the shooting boundary ratio is processed based on the processing model obtained by training in steps S31 to S32, and the corresponding reference angle group is output.
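The processing model of the first embodiment is likewise not specified beyond its input (a boundary ratio) and output (a reference angle group). The following is a rough sketch of steps S31 to S32, assuming a small multi-output regressor and a simple tolerance test standing in for the "preset range"; every data value is a placeholder.

```python
# Illustrative sketch only: the patent does not specify the processing model.
# Here it is approximated by a small regressor mapping a frame boundary ratio
# to a fixed-length reference angle group; the sample values are made up.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical samples: boundary ratio -> five reference angles (degrees)
X = np.array([[16 / 9], [4 / 3], [21 / 9], [1.0]])
y = np.array([
    [45, 20, 70, 86, 140],
    [45, 25, 65, 84, 130],
    [45, 15, 75, 88, 150],
    [45, 30, 60, 80, 120],
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# "Preset range" check (step S32): accept the model only if predictions on the
# test set stay within a tolerance of the recorded best-effect angles.
pred = model.predict(X_test)
accepted = np.all(np.abs(pred - y_test) <= 5.0)
```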
In the second embodiment, a correspondence between boundary ratios and reference angle groups is stored in advance. Specifically, for each boundary ratio, the shooting effects within the reference lines that the boundary ratio can determine at different reference angles are recorded; the reference angles with excellent shooting effects are determined, and the correspondence between those reference angles and the boundary ratio is set; this is repeated until the correspondences between the different boundary ratios and their reference angle groups are determined.
There are at least 3 reference angles with excellent shooting effects for each boundary ratio.
In the specific implementation of step S23, the pre-stored correspondence between boundary ratios and reference angle groups is searched, and the reference angle group corresponding to the boundary ratio is determined.
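For the second embodiment, the pre-stored correspondence can be represented as a simple lookup table. The sketch below is an assumption-laden illustration: the 16:9 entry reuses the angle values that appear in the worked example further down (45°, 20°, 70°, 86°, 140°), and the 4:3 entry is invented.

```python
from math import gcd

# Hypothetical pre-stored correspondence (second embodiment). The 16:9 angles
# follow the worked example (r1..r5 = 45, 20, 70, 86, 140 degrees); the 4:3
# entry is a made-up placeholder.
REFERENCE_ANGLE_GROUPS = {
    (16, 9): [45.0, 20.0, 70.0, 86.0, 140.0],
    (4, 3): [45.0, 25.0, 65.0, 84.0, 130.0],
}

def lookup_reference_angles(width: int, height: int) -> list:
    """Normalise the frame boundary ratio and look up its reference angle group."""
    g = gcd(width, height)
    key = (width // g, height // g)
    try:
        return REFERENCE_ANGLE_GROUPS[key]
    except KeyError:
        raise ValueError(f"no reference angle group stored for ratio {key[0]}:{key[1]}")

angles = lookup_reference_angles(1920, 1080)   # -> the 16:9 group
```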
Step S24: and drawing a corresponding datum line based on the datum angle group and the shooting boundary line.
It should be noted that, in the specific implementation process of step S24, the method includes the following steps:
firstly, according to a first reference angle in the reference angle group, taking a left boundary line and a right boundary line in shot boundary lines as the base of a triangle, and respectively drawing the first triangle by taking the first reference angle as a base angle; taking the vertex angle of each first triangle as a datum point; then, respectively drawing second triangles by taking the upper boundary line and the lower boundary line as the bottom line and taking the waist line of the first triangle as an extension line, wherein the vertex angle of each second triangle is taken as a reference point; drawing corresponding datum lines in parallel or perpendicular to the left boundary line and the right boundary line by taking the datum points as basic points; then, according to a second reference angle in the reference angle group, adjusting the positions of the first triangle and the second triangle to determine a new reference point; and drawing corresponding datum lines in parallel or perpendicular to the left borderline and the right borderline by taking the new datum point as a base point, and the like until all the datum lines are drawn.
In order to better understand the procedure shown in step S24 described above, an example will be illustrated below, as shown in fig. 4.
For example, suppose the boundary ratio 16:9 determines a corresponding reference angle group Q, where the group Q includes a first reference angle r1, a second reference angle r2, a third reference angle r3, a fourth reference angle r4 and a fifth reference angle r5.
According to the first reference angle r1 in the group Q, a first triangle S1 is drawn with the left boundary line A1 of the frame boundary lines as its base and with base angles a2 = 45° and a3 = 45°; from the first triangle S1 it can be determined that the apex angle a1 is a right angle, and the apex is established as reference point H1. Likewise, a first triangle S2 is drawn with the right boundary line B1 as its base and with base angles b2 = 45° and b3 = 45°; from the first triangle S2 it can be determined that the apex angle b1 is a right angle, and the apex is established as reference point H3. Then, taking the apexes of the first triangles S1 and S2 as base points, a second triangle Y1 is drawn with the lower boundary line F1 as its base and the waists of S1 and S2 extended as its sides, and its apex is determined to be reference point H2; a second triangle Y2 is drawn with the upper boundary line F4 as its base and the waists of S1 and S2 extended as its sides, and its apex is determined to be reference point H4.
Taking reference points H2 and H4 as base points, the corresponding reference line G7 is drawn parallel to the left boundary line A1 and the right boundary line B1. Taking reference points H1 and H3 as base points, the corresponding reference lines G2 and G3 are drawn parallel to the left boundary line A1 and the right boundary line B1. Taking reference points H2 and H4 as base points, the corresponding reference lines F6 and F7 are drawn perpendicular to the left boundary line A1 and the right boundary line B1.
Next, according to the second reference angle r2 in the group, that is, with angles a3 and b2 set to 20° and angles a2 and b3 set to 70°, the positions of the first triangles S1, S2 and the second triangles Y1, Y2 are adjusted to determine new reference points H1, H2, H3 and H4; taking the new reference points H1 and H3 as base points, the corresponding reference lines G1 and G4 are drawn parallel to the left boundary line A1 and the right boundary line B1.
Then, according to the third reference angle r3 in the group, angles a2 and a3 together equal 110° and angles b2 and b3 together equal 110°, that is, the apex angles a1 and b1 are both 70°; the positions of the first triangles S1, S2 and the second triangles Y1, Y2 are adjusted to determine new reference points H1, H2, H3 and H4; taking the new reference points H1 and H3 as base points, the corresponding reference lines G5 and G6 are drawn parallel to the left boundary line A1 and the right boundary line B1.
Then, according to the fourth reference angle r4 in the group, the angle d1 is set to 86°, and likewise the angle c1 is set to 86°. With the lower boundary line F1 as its base, the angle d1 at 86°, and the waists of the first triangles S1 and S2 as extension lines, the second triangle Y1 is drawn, and the position of its apex is determined to be reference point H2; taking reference point H2 as a base point, the corresponding reference line F5 is drawn perpendicular to the left boundary line A1 and the right boundary line B1. Similarly, with the upper boundary line F4 as its base, the angle c1 at 86°, and the waists of the first triangles S1 and S2 as extension lines, the second triangle Y2 is drawn, the position of its apex is determined to be reference point H4, and taking reference point H4 as a base point, the corresponding reference line F8 is drawn perpendicular to the left boundary line A1 and the right boundary line B1.
It should be noted that setting the angle d1 to 86° means that, with d2 and d3 kept unchanged, the apex angle d1 is set to be greater than about 83.3°, which by the law of cosines is the smallest value it can take; it is therefore set to 86°, and the angle c1 is set to 86° in the same way.
Finally, according to the fifth reference angle r5 in the group, the angle d1 is set to 140°, and likewise the angle c1 is set to 140°. With the lower boundary line F1 as its base, the angle d1 at 140°, and the waists of the first triangles S1 and S2 as extension lines, the second triangle Y1 is drawn, and the position of its apex is determined to be reference point H2; taking reference point H2 as a base point, the corresponding reference line F11 is drawn perpendicular to the left boundary line A1 and the right boundary line B1. Similarly, with the upper boundary line F4 as its base, the angle c1 at 140°, and the waists of the first triangles S1 and S2 as extension lines, the second triangle Y2 is drawn, the position of its apex is determined to be reference point H4, and taking reference point H4 as a base point, the corresponding reference line F12 is drawn perpendicular to the left boundary line A1 and the right boundary line B1.
It should be noted that, since the current frame ratio is known to be 16:9, the angles d1 and c1 are set to 140° based on bisection of the frame.
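The construction above chains several triangles together; as one possible simplified reading (not the only one), each reference point can be taken as the apex of an isosceles triangle built directly on one frame boundary, which yields vertical and horizontal reference lines as follows. The formulas and function names are illustrative assumptions.

```python
# Geometry sketch under simplifying assumptions: each reference point is the
# apex of an isosceles triangle built on one frame boundary, and the reference
# line through it is parallel (vertical) or perpendicular (horizontal) to the
# left/right boundaries. This is one reading of the worked example, not a
# definitive reconstruction of the patent's figure.
from math import tan, radians

def vertical_reference_lines(width: float, height: float, base_angle_deg: float):
    """Vertical reference lines from triangles whose bases are the left/right
    boundaries and whose base angles equal `base_angle_deg` (e.g. r1 = 45)."""
    offset = (height / 2.0) * tan(radians(base_angle_deg))
    return offset, width - offset          # x positions of the two lines

def horizontal_reference_lines(width: float, height: float, apex_angle_deg: float):
    """Horizontal reference lines from triangles whose bases are the bottom/top
    boundaries and whose apex angle equals `apex_angle_deg` (e.g. r4 = 86)."""
    offset = (width / 2.0) / tan(radians(apex_angle_deg) / 2.0)
    return offset, height - offset         # y positions of the two lines

# 16:9 frame, as in the example
g_left, g_right = vertical_reference_lines(16.0, 9.0, 45.0)
f_low, f_high = horizontal_reference_lines(16.0, 9.0, 86.0)
```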
Step S305: and tracking the character mark points of the shot picture.
In step S305, the mark point is determined based on the mirror mode.
It should be noted that, the process of implementing step S305 specifically includes the following steps:
step S31: and setting character mark points according to the character reference line and the mirror mode.
Note that, the mark points corresponding to the different mirror modes are also different.
The number of the marking points also changes according to the different mirror modes.
If the camera movement mode is close-up shooting, the number of marker points is 3, specifically: marker points R1, R2 and R3, where R1 is a head marker point, R2 is a left shoulder marker point and R3 is a right shoulder marker point, as shown in part x1 of fig. 5.
If the camera movement mode is medium-shot shooting, the number of marker points is 3, specifically: marker points R1, R2 and R3, where R1 is a head marker point, R2 is a left waist marker point and R3 is a right waist marker point, as shown in part x2 of fig. 5.
If the camera movement mode is panoramic shooting, the number of marker points is 5, specifically: marker points R1, R2, R3, R4 and R5, where R1 is a head marker point, R2 is a left waist marker point, R3 is a right waist marker point, R4 is a left ankle marker point and R5 is a right ankle marker point, as shown in part x3 of fig. 5.
In the specific implementation of step S31, if the camera movement mode is close-up shooting, a head marker point R1, a left shoulder marker point R2 and a right shoulder marker point R3 are marked in each captured frame according to the person reference lines; if the camera movement mode is medium-shot shooting, a head marker point R1, a left waist marker point R2 and a right waist marker point R3 are marked in each captured frame according to the person reference lines; if the camera movement mode is panoramic shooting, a head marker point R1, a left waist marker point R2, a right waist marker point R3, a left ankle marker point R4 and a right ankle marker point R5 are marked in each captured frame according to the person reference lines.
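The mode-dependent marker point sets above can be captured in a small configuration table. How the corresponding body joints are localised in a frame (for example with a pose-estimation model) is not specified in the patent and is assumed to be available; the names below mirror the R1 to R5 labels.

```python
# Marker point sets per camera movement mode, mirroring the patent's R1..R5
# labels. Joint localisation itself is assumed to be provided elsewhere.
MARKER_POINTS = {
    "close_up": ["head", "left_shoulder", "right_shoulder"],            # R1, R2, R3
    "medium":   ["head", "left_waist", "right_waist"],                  # R1, R2, R3
    "panorama": ["head", "left_waist", "right_waist",
                 "left_ankle", "right_ankle"],                          # R1..R5
}

def marker_points_for_mode(mode: str) -> list:
    """Return the marker point names to track for the given camera movement mode."""
    return MARKER_POINTS[mode]
```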
The person reference lines are reference lines for different positions of the human body, set by a technician according to ergonomic experiments.
Step S32: tracking the person marker points in each captured frame.
In the specific implementation of step S32, the movement of each person marker point is marked according to the change of the captured frames, so as to track the person marker points.
Step S306: and determining whether the shooting angle needs to be adjusted based on the change of the character mark points, if so, executing step S307, and if not, returning to executing step S305.
It should be noted that, there are multiple implementations of the process of implementing step S306.
A first embodiment comprises the steps of:
step S41: and recording the moving range of each character mark point in a preset frame shooting picture respectively to obtain a mark moving range.
The mark moving range includes the moving range of each mark point such as the head mark moving range.
Step S42: and determining a datum line where the character mark point is based on the mark moving range.
In the specific implementation process of step S42, the reference line interval of each object marking point is determined by marking the moving range.
For example: the moving range of the character mark point R1 is determined to be between the [ reference line F5, reference line F6], and between the [ reference line G5, reference line G6 ].
Step S43: based on whether the reference line interval of each character mark point is within the corresponding preset reference range, if any one is not, step S44 is executed, and if both are, step S305 is executed again.
The preset reference range is set according to a plurality of tests.
In the specific implementation process of step S43, whether the reference line interval of each character mark point is within the corresponding preset reference range is compared, if any one is not, step S44 is executed, and if all are, step S305 is executed again.
Step S44: based on whether the interior angles of the triangle drawn by the character mark points respectively meet the corresponding threshold conditions, if any one of the interior angles does not meet the corresponding threshold conditions, step S46 is executed, and if both interior angles meet the corresponding threshold conditions, step S305 is executed again.
It should be noted that, in the specific implementation process of step S44, the method includes the following steps:
step S51: and taking the second mark point and the third mark point in the character mark points as reference points of the first triangle to adjust the inner angles of the first triangle, namely taking the second mark point and the third mark point in the character mark points as vertexes of the first triangle respectively, drawing the first triangle, and determining the inner angles of each triangle.
Step S52: judging whether the angles of the inner angles in the first triangle respectively meet corresponding threshold conditions, and in the specific implementation step S45, overlapping the datum point H1 with a second mark point R2 in the character mark points and overlapping the datum point H3 with a third mark point R3 in the character mark points so as to adjust the first triangles S1 and S2; and determining whether the inner angles of the adjusted first triangles S1 and S2 respectively meet the corresponding threshold conditions, if any one of the inner angles does not meet the corresponding threshold conditions, executing step S46, and if both inner angles meet the corresponding threshold conditions, returning to executing step S305.
It should be noted that, if the camera movement mode is close-up, the threshold condition is whether the interior angles of the adjusted first triangles S1 and S2 are smaller than a first angle; the critical value of the interior angle is the first angle, and in practical application, once the threshold ratio range is exceeded, the captured frame of the shot is adjusted.
The first angle is set through multiple experiments and may, for example, be set to 45°.
The first angles corresponding to the interior angles of the first triangles S1 and S2 may be the same or different.
Optionally, if the camera movement mode is close-up, the method further includes: judging whether the second marker point R2 and the third marker point R3 are not lower than the reference line F11; if they are lower, step S45 is executed, and if they are not, step S305 is executed again.
If the camera movement mode is medium shot, the threshold condition is whether the interior angles of the adjusted first triangles S1 and S2 are smaller than a second angle; the critical value of the interior angle is the second angle, and in practical application, once the threshold ratio range is exceeded, the captured frame of the shot is adjusted.
Optionally, if the camera movement mode is medium shot, the method further includes: judging whether the second marker point R2 and the third marker point R3 are not lower than the reference line F11; if they are lower, step S45 is executed, and if they are not, step S305 is executed again.
The second angle is set through multiple experiments; it may be the same as or different from the first angle, and may, for example, be set to 45°.
The second angles corresponding to the interior angles of the first triangles S1 and S2 may be the same or different.
If the camera movement mode is long shot, the threshold condition is whether the interior angles of the adjusted first triangles S1 and S2 are larger than a third angle; the critical value of the interior angle is the third angle, and in practical application, once the threshold ratio range is exceeded, the captured frame of the shot is adjusted.
Optionally, if the camera movement mode is long shot, the method further includes: judging whether the fourth marker point R4 and the fifth marker point R5 are not lower than the reference line F11; if they are lower, step S45 is executed, and if they are not, step S305 is executed again.
The third angle is set through multiple experiments; it may be the same as or different from the second angle and/or the first angle, and may, for example, be set to 45°.
The third angles corresponding to the interior angles of the first triangles S1 and S2 may be the same or different.
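Steps S51 and S52 re-draw the first triangles with their apexes snapped to marker points R2 and R3 and then test the interior angles against a per-mode threshold. The sketch below shows only the interior-angle computation and the per-mode comparison; the threshold value of 45° is taken from the example above and the mode names are assumptions.

```python
# Sketch of the interior-angle check. Only the angle computation and per-mode
# comparison are shown; snapping the first triangles' apexes to R2/R3 and the
# reference-line precondition are omitted. Threshold values are assumed.
from math import acos, degrees, dist

def interior_angles(a, b, c):
    """Interior angles (degrees) of the triangle with vertices a, b, c."""
    la, lb, lc = dist(b, c), dist(a, c), dist(a, b)   # sides opposite a, b, c
    angle_a = degrees(acos((lb**2 + lc**2 - la**2) / (2 * lb * lc)))
    angle_b = degrees(acos((la**2 + lc**2 - lb**2) / (2 * la * lc)))
    return angle_a, angle_b, 180.0 - angle_a - angle_b

def needs_adjustment(angles, mode, threshold=45.0):
    if mode in ("close_up", "medium"):
        # close-up / medium shot: adjust once any interior angle reaches the threshold
        return not all(angle < threshold for angle in angles)
    if mode == "long":
        # long shot: adjust once any interior angle drops to or below the threshold
        return not all(angle > threshold for angle in angles)
    raise ValueError(f"unknown camera movement mode: {mode}")

angles = interior_angles((0.0, 0.0), (4.0, 0.0), (2.0, 3.0))
adjust = needs_adjustment(angles, "close_up")
```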
For a better understanding of the method in step S44, two examples are given below.
In the first example, suppose frame n1 is captured in close-up or medium-shot mode, and the movement range of R1 is determined to lie between [reference line F5, reference line F6] and between [reference line G5, reference line G6]; in frame n1+1, R1 is locked between [reference line G6, reference line G7]; at this moment the angle d2 decreases and the angle d3 increases, so it is determined that the person's face and body are currently moving toward the right boundary line B1.
When the person's face and body deviate toward the line B1, the movement-range intervals of R2 and R3 lie between [reference line G1, reference line G6] and between [reference line G5, reference line G4] respectively; the reference points H1 and H3 are made to coincide with R2 and R3 respectively to adjust the first triangles, and it is determined whether the interior angles of the first triangles are smaller than the corresponding threshold; in practical application, once the threshold condition is not met, the captured frame is adjusted.
In the second example, suppose frame n1 is captured in long-shot mode, and the movement range of R1 is determined to lie between [reference line F5, reference line F6] and between [reference line G5, reference line G6]; in frame n1+1, R1 is locked between [reference line G6, reference line G7]; at this moment the angle d2 decreases and the angle d3 increases, and, dividing the frame by the G7 line, it is determined that the person's face and body are currently moving toward the right boundary line B1.
When the person's face and body deviate toward the line B1, the movement-range intervals of R2 and R3 lie between [reference line G1, reference line G6] and between [reference line G5, reference line G4] respectively; the reference points H1 and H3 are made to coincide with R2 and R3 respectively to adjust the first triangles, and it is determined whether the interior angles of the first triangles are larger than the corresponding threshold; in practical application, once the threshold condition is not met, the captured frame is adjusted.
Step S45: it is determined that the photographing angle needs to be adjusted.
Step S307: and controlling the virtual camera to adjust the shooting angle according to the movement of the character mark point.
In the process of implementing step S307, when the person turns around or leans on, the person mark points R1, R2, R3 are changes that will shift; drawing a triangle through the character mark points R1, R2 and R3, wherein the corresponding inner angles R11, R22 and R33 of the triangle are changed when the character mark points R1, R2 and R3 are displaced; and recording the angle threshold ratios of the inner angles r11, r22 and r33, searching the corresponding relation between the angle threshold ratios of the inner angles r11, r22 and r33 and the shooting angles, determining the corresponding shooting angles, and controlling the virtual camera to shoot according to the shooting angles.
It should be noted that, the correspondence between the angle threshold ratio of the internal angles r11, r22, r33 and the photographing angle is set in advance according to a plurality of experiments.
For example: when a person turns to the left, the R3 point is offset downwards due to perspective, meanwhile, the distance between R3 and R2 is shortened, the angles of the inner angles R22 and R33 are increased, the angle of the inner angle R11 is reduced, the camera can return to the original position by referring to the initial mean value angle value, the angles of R11, R22 and R33 are adjusted, the virtual camera also rotates to the left and moves, meanwhile, the head module of the main body is combined to judge, the direction of the current person body and the advancing direction are adjusted, the size of the angles of R22 and R33 is adjusted, more space is reserved for the direction of the main body of the person, the rancour face in front is avoided, and the side face, the micro side face and the 45-degree side face can be adjusted, so that better image shooting expression is realized. And continuously learning by continuously adding manually marked standard materials, and adjusting the angle threshold ratios of r11, r22 and r33 to form the corresponding relation between the angle threshold ratios of the inner angles r11, r22 and r33 and shooting angles.
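The correspondence between the angle threshold ratios of r11, r22 and r33 and the shooting angle is set experimentally and is not tabulated in the patent. The sketch below invents a tiny bucketed lookup purely to illustrate the flow of step S307; the table contents and the yaw-offset convention are assumptions.

```python
# Sketch of step S307 under assumptions: the pre-set correspondence between
# interior-angle threshold ratios and shooting angles is represented as a
# bucketed lookup table. Table contents and the yaw convention are invented.
def angle_ratios(current_angles, baseline_angles):
    """Ratio of each current interior angle to its baseline (initial mean) value."""
    return tuple(cur / base for cur, base in zip(current_angles, baseline_angles))

# Hypothetical pre-set correspondence: (r11, r22, r33 ratio buckets) -> yaw offset
SHOOTING_ANGLE_TABLE = {
    ("low", "high", "high"): -30.0,   # subject turning left -> rotate/move camera left
    ("low", "high", "mid"): -15.0,
    ("high", "low", "low"): +30.0,    # subject turning right
}

def bucket(ratio, lo=0.9, hi=1.1):
    return "low" if ratio < lo else "high" if ratio > hi else "mid"

def shooting_angle_offset(current_angles, baseline_angles):
    key = tuple(bucket(r) for r in angle_ratios(current_angles, baseline_angles))
    return SHOOTING_ANGLE_TABLE.get(key, 0.0)   # 0 -> keep the current shooting angle

offset = shooting_angle_offset((50.0, 70.0, 60.0), (60.0, 60.0, 60.0))   # -> -15.0
```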
In the embodiment of the present invention, a trained preset recognition model recognizes the captured frames of a shooting task and determines the corresponding camera movement modes, distinguishing shot types such as close-up, medium shot, long shot and empty shot within those modes; for each virtual camera, reference lines are determined based on the captured frame size, the captured frame size being determined from the shooting task carried in the virtual-camera shooting instruction; person marker points in the captured frames are tracked, the marker points being determined based on the camera movement mode; and if it is determined, based on the change of the person marker points relative to the reference lines, that the shooting angle needs to be adjusted, the virtual camera is controlled to adjust its shooting angle according to the movement of the person marker points. In this way, the proportion of the person subject in the frame is controlled, thereby improving the shooting effect.
Optionally, based on the method shown in the foregoing embodiment of the present invention, as shown in fig. 6, the method further includes:
step S601: aiming at each virtual camera, collecting each frame shooting picture shot by the virtual camera;
step S602: for each frame of shooting picture, carrying out multi-dimensional identification on the shooting picture to obtain shooting picture elements corresponding to each dimension;
in the specific implementation process of step S601 to step S602, for each virtual camera, each frame of shooting picture shot by the virtual camera is first collected; each frame of shooting picture is then identified according to dimensions such as the background extraction dimension, the head main body dimension, the event dimension, the character main body feature dimension and the shooting picture composition dimension, so as to obtain the shooting picture elements corresponding to each dimension.
It should be noted that the event dimension further includes a multi-character main body interaction dimension.
Step S603: whether or not the abnormal element exists in the photographed picture element corresponding to each dimension is determined, if not, step S604 is executed, and if any exists, the photographed picture of the frame is deleted.
In the specific implementation process of step S603, for the background extraction dimension, foreground elements, background elements and scene collision elements in the shooting picture elements are determined; for the foreground elements, it is judged whether an occluding-object element covering the character main body exists; for the background elements, it is judged whether the sky light source direction in the background elements is consistent with the preset light source direction; for the scene collision elements, it is judged whether an obstacle element inconsistent with the scene exists; if none exists, the judgment of the next dimension is executed, and if any exists, the frame of shooting picture is deleted.
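A hedged sketch of these background-extraction checks is given below; the element fields and the angular tolerance for the light source comparison are assumptions, not parameters fixed by the method.

```python
# Illustration of the background-extraction dimension checks; field names and the
# 10-degree tolerance are assumed for the sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BackgroundElements:
    occluders_over_subject: List[str] = field(default_factory=list)  # foreground occluder elements
    sky_light_direction_deg: float = 0.0                             # detected sky light source direction
    scene_collisions: List[str] = field(default_factory=list)        # obstacles inconsistent with the scene

def background_dimension_ok(e: BackgroundElements,
                            preset_light_deg: float,
                            tolerance_deg: float = 10.0) -> bool:
    if e.occluders_over_subject:                                   # occluder covering the character main body
        return False
    if abs(e.sky_light_direction_deg - preset_light_deg) > tolerance_deg:
        return False                                               # light direction disagrees with the preset
    if e.scene_collisions:                                         # obstacle elements colliding with the scene
        return False
    return True

print(background_dimension_ok(BackgroundElements(sky_light_direction_deg=95.0), 90.0))  # True
```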
If the dimension is the head main body dimension, the shooting picture elements include elements such as the front face, the side face and the back of the head; it is judged whether the front face element accords with the preset element condition, and if so, the judgment of the next dimension is executed; if not, the frame of shooting picture is deleted.
The preset element condition means that the frontal orientation of the person's head is maintained for a preset number of empty-mirror shooting pictures.
If the dimension is the event dimension, it is judged whether a plurality of main body characters exist in the shooting picture elements; if so, it is determined that the multi-character main body interaction dimension exists, and a preset number of virtual cameras are triggered to perform preset multi-angle shooting of the characters in the shooting picture.
The preset number is determined according to the number of the main body characters.
Optionally, a position-information synchronization hanging point is attached when the multiple main body characters are created, so that the virtual cameras can continuously exchange information and judge the position range of the multiple main body characters. When the positions reach a threshold at which interaction is possible, a plurality of additional AI virtual cameras can be created to perform panoramic shooting, so that, whether or not interaction actually occurs, the event of the multiple main bodies meeting is recorded from multiple angles, avoiding a lack of shooting picture material when interaction does occur.
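The following Python sketch illustrates one possible reading of this multi-subject handling: positions from the synchronized hanging points are compared pairwise, and extra panoramic virtual cameras are created once two subjects come within an interaction threshold. The class, method names, distance threshold and camera count are invented for the illustration.

```python
# Hedged sketch: spawn additional AI virtual cameras when subjects approach each other.
import itertools
import math

class VirtualCameraPool:
    def __init__(self):
        self.cameras = []

    def spawn_panoramic(self, count, around):
        """Create `count` extra panoramic cameras aimed at a point between the subjects."""
        for i in range(count):
            self.cameras.append({"id": len(self.cameras), "mode": "panoramic",
                                 "target": around, "angle_index": i})

def monitor_subjects(subject_positions, pool, interact_dist=2.0, cams_per_pair=3):
    """subject_positions: {name: (x, y)} read from the position-information hanging points."""
    for (name_a, pa), (name_b, pb) in itertools.combinations(subject_positions.items(), 2):
        if math.dist(pa, pb) <= interact_dist:
            midpoint = ((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2)
            pool.spawn_panoramic(cams_per_pair, midpoint)  # record the meeting from multiple angles

pool = VirtualCameraPool()
monitor_subjects({"A": (0.0, 0.0), "B": (1.5, 0.5)}, pool)
print(len(pool.cameras))  # 3 extra cameras created because A and B are close enough
```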
Optionally, the method further includes extracting the character main body feature dimension and marking triangle mark points on the main body character according to different lens-transporting modes, wherein the positions of the mark points differ between lenses of different lens-transporting modes, as summarized in the sketch following the panoramic case below.
Specifically, if the lens-transporting mode is close-up, the number of marking points is 3, which specifically includes: marking points R1, R2 and R3, wherein the marking point R1 is a head marking point, R2 is a left shoulder marking point, and R3 is a right shoulder marking point.
If the lens-transporting mode is a middle view, the number of marking points is 3, and the method specifically comprises the following steps: marking points R1, R2 and R3, wherein the marking point R1 is a head marking point, R2 is a left waist marking point, and R3 is a right waist marking point.
If the lens-transporting mode is panoramic, the number of marking points is 5, and the method specifically comprises the following steps: the marking points R1, R2, R3, R4 and R5, wherein the marking point R1 is a head marking point, R2 is a left waist marking point, R3 is a right waist marking point, R4 is a left ankle marking point, and R5 is a right ankle marking point.
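For reference, the mark-point layouts listed above can be summarized as a simple lookup table, as in the following sketch; the mode keys and label strings are assumed names, while the point counts and body parts follow the text directly.

```python
# Hedged summary of the mark-point layouts per lens-transporting mode.
MARK_POINT_LAYOUTS = {
    "close_up":  {"R1": "head", "R2": "left_shoulder", "R3": "right_shoulder"},
    "medium":    {"R1": "head", "R2": "left_waist",    "R3": "right_waist"},
    "panoramic": {"R1": "head", "R2": "left_waist",    "R3": "right_waist",
                  "R4": "left_ankle", "R5": "right_ankle"},
}

def mark_points_for(mode: str):
    """Return the labeled mark points used for the given lens-transporting mode."""
    return MARK_POINT_LAYOUTS[mode]

print(len(mark_points_for("panoramic")))  # 5 mark points in the panoramic mode
```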
If the dimension is the shooting picture composition dimension, it is determined whether the shooting picture element is a shooting picture composition element; if so, step S604 is executed, and if not, the frame of shooting picture is deleted.
The shooting picture composition element is data of a predetermined shooting picture composition.
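A minimal sketch of this composition-dimension comparison is shown below, assuming the composition data can be represented as a small dictionary of fields; the field names and values are illustrative only.

```python
# Sketch of the composition-dimension check against predetermined composition data.
def composition_dimension_ok(frame_composition: dict, preset_composition: dict,
                             required_keys=("subject_ratio", "rule_of_thirds")) -> bool:
    """Return True when the frame's composition fields match the preset data."""
    return all(frame_composition.get(k) == preset_composition.get(k) for k in required_keys)

preset = {"subject_ratio": "one_third", "rule_of_thirds": True}
print(composition_dimension_ok({"subject_ratio": "one_third", "rule_of_thirds": True}, preset))  # True
```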
It should be noted that the specific implementation process of step S603 may be illustrated by fig. 7.
Step S604: and determining the shooting picture as an effective shooting picture, and storing the effective shooting picture.
In the embodiment of the invention, for each virtual camera, each frame of shooting picture shot by the virtual camera is collected; multi-dimensional identification is carried out on each frame of shooting picture to obtain the shooting picture elements corresponding to each dimension; it is judged whether an abnormal element exists in the shooting picture elements corresponding to each dimension; and if no abnormal element exists, the shooting picture is determined to be an effective shooting picture and the effective shooting picture is stored. By judging each frame of shooting picture and saving only the effective shooting pictures, the shooting effect is improved.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively brief, and reference may be made to the description of the method embodiment for relevant parts. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A virtual camera shooting method, characterized by being applied to a virtual camera shooting system, the method comprising:
acquiring a shooting instruction of a virtual camera;
controlling virtual cameras corresponding to the number of cameras to start based on the shooting instruction of the virtual cameras;
inputting a preset recognition model according to shooting tasks and the number of cameras carried in the shooting instructions of the virtual cameras, processing the shooting tasks and the number of cameras based on the preset recognition model, and outputting a lens-carrying mode of each virtual camera so that the virtual cameras shoot according to the corresponding lens-carrying mode, wherein the preset recognition model is constructed based on sample data;
determining a reference line for each virtual camera based on a shooting picture size determined by a shooting task carried in a shooting instruction of the virtual camera;
tracking character marking points of a shot picture, wherein the marking points are determined based on the mirror mode;
and if the shooting angle is determined to be required to be adjusted based on the change of the character mark point and the datum line, controlling the virtual camera to adjust the shooting angle according to the movement of the character mark point.
2. The method as recited in claim 1, further comprising:
aiming at each virtual camera, collecting each frame shooting picture shot by the virtual camera;
for each frame of shooting picture, carrying out multi-dimensional identification on the shooting picture to obtain shooting picture elements corresponding to each dimension;
judging whether an abnormal element exists in the shooting picture element corresponding to each dimension;
and if no abnormal element exists, determining the shooting picture as an effective shooting picture, and storing the effective shooting picture.
3. The method as recited in claim 2, further comprising:
and if the dimension is the interaction dimension of the multi-character main body, triggering a preset number of virtual cameras to perform preset multi-angle shooting on characters in a shooting picture.
4. The method of claim 1, wherein constructing the pre-set recognition model based on the sample data comprises:
acquiring historical shooting tasks consisting of shooting types and shooting time lengths of different shooting scenes, and a historical lens-transporting mode of each frame of shooting picture in each shooting task;
and based on the historical shooting task, carrying out deep learning training on the number of cameras required by the historical shooting task and a historical lens-carrying mode to obtain a preset recognition model after training is completed.
5. The method of claim 1, wherein the determining the reference line based on the photographed picture size comprises:
acquiring a corresponding shooting boundary line from a shooting task carried in the shooting instruction of the virtual camera;
determining a shooting boundary ratio based on the shooting boundary line;
determining a corresponding reference angle group by using the shooting boundary ratio;
and determining a corresponding datum line based on the datum angle and the shooting boundary line.
6. The method of claim 5, wherein determining the corresponding set of reference angles using the shot boundary ratio comprises:
inputting the shooting boundary ratio into a processing model, processing the shooting boundary ratio based on the processing model, and outputting a reference angle group, wherein the processing model is constructed in advance;
the process of pre-constructing the processing model comprises the following steps:
different shooting boundary ratios are collected, and shooting effects in the datum lines determined by the different shooting boundary ratios under different datum angles are obtained;
training the initial model by using the different shooting boundary ratios, the different reference angles and the shooting effects in the reference lines determined by the different shooting boundary ratios under the different reference angles until the output shooting effects are in a preset range, and determining the initial model obtained by current training as a processing model.
7. The method of claim 1, wherein tracking the person mark point of the photographed picture comprises:
setting character mark points according to character reference lines and a mirror mode;
tracking is carried out according to the character mark points in each frame of shooting picture.
8. The method of claim 1, wherein determining that the shooting angle needs to be adjusted based on the change in the persona marker point and the fiducial line comprises:
recording the moving range of each character mark point in a preset frame shooting picture to obtain a mark moving range;
determining a datum line where the character mark point is located based on the mark moving range;
judging whether the datum line interval of each character mark point is within the corresponding preset datum range;
if yes, judging whether the inner angles of the triangle drawn based on the character mark points respectively meet the corresponding threshold conditions;
if any one is not satisfied, determining that the shooting angle needs to be adjusted.
9. A virtual camera shooting system, the system comprising:
the starting unit is used for acquiring shooting instructions of the virtual camera; controlling the starting of a corresponding number of virtual cameras based on the shooting instructions of the virtual cameras;
The preset recognition model is used for inputting a preset recognition model according to shooting tasks and the number of cameras carried in the shooting instructions of the virtual cameras, processing the shooting tasks and the number of cameras based on the preset recognition model, and outputting a lens-transporting mode of each virtual camera so that the virtual cameras shoot according to the corresponding lens-transporting mode, wherein the preset recognition model is constructed based on sample data;
the processing unit is used for determining a datum line based on the shooting picture size aiming at each virtual camera, wherein the shooting picture size is determined through shooting tasks carried in shooting instructions of the virtual cameras; tracking character marking points of a shot picture, wherein the marking points are determined based on the mirror mode;
and the adjusting unit is used for determining that the shooting angle needs to be adjusted based on the change of the character mark point and the datum line, and controlling the virtual camera to adjust the shooting angle according to the movement of the character mark point.
10. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the virtual camera shooting method according to any one of claims 1 to 8.
CN202311736198.5A 2023-12-18 2023-12-18 Shooting method and system for virtual camera Active CN117425076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311736198.5A CN117425076B (en) 2023-12-18 2023-12-18 Shooting method and system for virtual camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311736198.5A CN117425076B (en) 2023-12-18 2023-12-18 Shooting method and system for virtual camera

Publications (2)

Publication Number Publication Date
CN117425076A CN117425076A (en) 2024-01-19
CN117425076B true CN117425076B (en) 2024-02-20

Family

ID=89531114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311736198.5A Active CN117425076B (en) 2023-12-18 2023-12-18 Shooting method and system for virtual camera

Country Status (1)

Country Link
CN (1) CN117425076B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
JP2020167499A (en) * 2019-03-29 2020-10-08 株式会社 零Space Photographing support device
CN115294212A (en) * 2022-08-04 2022-11-04 广州博冠信息科技有限公司 Virtual mirror running sequence generation method and device, electronic equipment and storage medium
WO2023174009A1 (en) * 2022-03-17 2023-09-21 北京字跳网络技术有限公司 Photographic processing method and apparatus based on virtual reality, and electronic device
CN116966557A (en) * 2022-04-19 2023-10-31 网易(杭州)网络有限公司 Game video stream sharing method and device, storage medium and electronic equipment
CN117061857A (en) * 2023-08-25 2023-11-14 深圳互酷科技有限公司 Unmanned aerial vehicle automatic shooting method and device, unmanned aerial vehicle and medium
CN117119294A (en) * 2023-08-24 2023-11-24 腾讯科技(深圳)有限公司 Shooting method, device, equipment, medium and program of virtual scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108306965B (en) * 2018-01-31 2021-02-02 上海小蚁科技有限公司 Data processing method and device for camera, storage medium and camera
JP2022060900A (en) * 2020-10-05 2022-04-15 キヤノン株式会社 Control device and learning device and control method


Also Published As

Publication number Publication date
CN117425076A (en) 2024-01-19

Similar Documents

Publication Publication Date Title
JP5178553B2 (en) Imaging device
KR101503333B1 (en) Image capturing apparatus and control method thereof
US8300136B2 (en) Imaging apparatus for detecting a face image and imaging method
CN103081455B (en) The multiple images being captured from handheld device carry out portrait images synthesis
JP6101397B2 (en) Photo output method and apparatus
US7973833B2 (en) System for and method of taking image and computer program
CN110542975B (en) Zoom control device, image pickup apparatus, and control method of zoom control device
US20140092272A1 (en) Apparatus and method for capturing multi-focus image using continuous auto focus
JP5016909B2 (en) Imaging device
EP2139226A1 (en) Image recording apparatus, image recording method, image processing apparatus, image processing method, and program
CN109379537A (en) Slide Zoom effect implementation method, device, electronic equipment and computer readable storage medium
CN103534726A (en) Positional sensor-assisted image registration for panoramic photography
CN101010942A (en) Capturing a sequence of images
CN108702456A (en) A kind of focusing method, equipment and readable storage medium storing program for executing
JP6516434B2 (en) Image processing apparatus, imaging apparatus, image processing method
WO2021134179A1 (en) Focusing method and apparatus, photographing device, movable platform and storage medium
CN108600610A (en) Shoot householder method and device
CN112995507A (en) Method and device for prompting object position
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
JP7158841B2 (en) Imaging device, imaging method, program, recording medium, and image processing device
JP2020107956A (en) Imaging apparatus, imaging method, and program
CN117425076B (en) Shooting method and system for virtual camera
CN105467741A (en) Panoramic shooting method and terminal
CN110930437B (en) Target tracking method and device
CN114788254A (en) Auxiliary focusing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant