CN114422697B - Virtual shooting method, system and storage medium based on optical capturing - Google Patents

Virtual shooting method, system and storage medium based on optical capturing

Info

Publication number
CN114422697B
CN114422697B (application CN202210060188.3A)
Authority
CN
China
Prior art keywords
virtual
face data
face
data
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210060188.3A
Other languages
Chinese (zh)
Other versions
CN114422697A (en)
Inventor
Wang Yujue (王玉珏)
Li Lian (李炼)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Versatile Media Co ltd
Original Assignee
Zhejiang Versatile Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Versatile Media Co ltd filed Critical Zhejiang Versatile Media Co ltd
Priority to CN202210060188.3A priority Critical patent/CN114422697B/en
Publication of CN114422697A publication Critical patent/CN114422697A/en
Application granted granted Critical
Publication of CN114422697B publication Critical patent/CN114422697B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a virtual shooting method, system and storage medium based on optical capturing, relating to the technical field of virtual production. First position information determined by an optical-capture intelligent terminal is acquired, virtual shooting is performed according to second position information generated from the first position information, and finally the picture shot by the virtual camera is pushed to the intelligent terminal. Compared with the prior art, in which the production links are carried out separately and only spliced together at the end, the virtual shooting method based on optical capturing allows the shot picture to be seen in real time as in traditional shooting, so that the director and photographer can watch the camera's picture in real time while the operation and interaction of a real camera are simulated.

Description

Virtual shooting method, system and storage medium based on optical capturing
Technical Field
The present invention relates to the field of virtual production, and in particular, to a virtual shooting method, system and storage medium based on optical capturing.
Background
In virtual shooting, every shot of a film is performed in a virtual scene inside a computer, following the camera actions required by the director. All the elements needed for a shot, including scenes, characters and lights, are integrated into the computer, after which the director can "direct" the characters' performances and actions on the computer and move the camera to any angle according to his own intent.
However, conventional virtual shooting has the following problems. On the one hand, most changes of the virtual camera's position and parameters come from animation curves made manually by a producer rather than from real-time capture during shooting, so much virtual shooting is not real-time: the production links are carried out separately and only spliced together at the end, and the shot picture cannot be viewed in real time as in traditional shooting. On the other hand, much of the data is produced manually, which seriously slows down virtual shooting and reduces the accuracy of the simulation.
Disclosure of Invention
The invention aims to solve the problems in the background art and provides a virtual shooting method, a virtual shooting system and a storage medium based on optical capturing.
In order to achieve the above objective, the present invention firstly proposes a virtual shooting method based on optical capturing, comprising the following steps: acquiring first position information determined by an optical capturing intelligent terminal; generating second position information according to the first position information; performing virtual photographing according to the second position information; and pushing the picture shot by the virtual camera to the intelligent terminal.
Optionally, the method further comprises the following steps: acquiring first face data; generating second face data by carrying out matching processing on the first face data; and taking the second face data as face target data of a preset character model, and generating a facial expression corresponding to the preset character model.
Optionally, generating the second face data by performing matching processing on the first face data includes the following steps: taking the first face data in the expressionless state as an initial value and subtracting this initial value from the first face data of every other frame to initialize the face; and multiplying the face-initialized first face data by a preset coefficient to scale the expression as a whole.
Optionally, the method further comprises the following steps: forming skeleton data according to the acquired key point position data; binding the bone data with a bone model; the virtual character is driven in real time according to the bound bone data.
Optionally, the intelligent terminal is a tablet computer.
The invention also provides a virtual shooting system based on optical capturing, which comprises: the first position information processing module is configured to acquire first position information determined by the optical capturing intelligent terminal; a second location information processing module configured to generate second location information from the first location information; a virtual photographing module configured to perform virtual photographing using the second position information as a position of a virtual camera; and the picture pushing module is configured to push the picture shot by the virtual camera to the intelligent terminal.
Optionally, the method further comprises: the first face data processing module is configured to acquire first face data; the second face data processing module is configured to generate second face data by carrying out matching processing on the first face data; and the facial expression generating module is configured to take the second facial data as the facial target data of the preset character model so as to generate the facial expression corresponding to the preset character model.
Optionally, the second face data processing module further includes: a face initialization module configured to take the first face data in the expressionless state as an initial value and subtract this initial value from the first face data of every other frame to perform face initialization; and an expression overall scaling module configured to multiply the face-initialized first face data by a preset coefficient to scale the expression as a whole.
The present invention also proposes a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the above-described virtual photographing method based on optical capturing.
The invention has the beneficial effects that:
in the virtual shooting method based on optical capturing, the first position information determined by the optical-capture intelligent terminal is acquired, virtual shooting is performed according to the second position information generated from the first position information, and finally the picture shot by the virtual camera is pushed to the intelligent terminal. This solves the prior-art problem that changes of the virtual camera's position and parameters are made through animation curves produced manually by a producer and are therefore not real-time. Compared with the prior art, in which the production links are carried out separately and only spliced together at the end, this method lets the shot picture be seen in real time as in traditional shooting, so that the director and photographer can watch the camera's picture in real time while the operation and interaction of a real camera are simulated.
The features and advantages of the present invention will be described in detail by way of example with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic flow chart of a virtual shooting method based on optical capturing according to an embodiment of the present invention;
FIG. 2 is a second flow chart of a virtual shooting method based on optical capturing according to the embodiment of the invention;
FIG. 3 is a third flow chart of a virtual shooting method based on optical capturing according to the embodiment of the present invention;
FIG. 4 is a block diagram of a virtual camera system based on optical capturing according to an embodiment of the present invention;
FIG. 5 is a second block diagram of a virtual camera system based on optical capturing according to an embodiment of the present invention;
fig. 6 is a third block diagram of a virtual shooting system based on optical capturing according to an embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to specific examples, to facilitate understanding by those skilled in the art.
Fig. 1 schematically illustrates a flow chart of a virtual shooting method based on optical capturing according to an embodiment of the present invention.
As shown in fig. 1, the virtual photographing method based on optical capturing includes steps S10 to S40:
step S10, acquiring first position information determined by an optical capturing intelligent terminal;
step S20, generating second position information according to the first position information;
step S30, virtual shooting is carried out according to the second position information;
and step S40, pushing the picture shot by the virtual camera to the intelligent terminal.
In the virtual shooting method based on optical capturing, the first position information determined by the optical-capture intelligent terminal is first acquired, virtual shooting is then performed according to the second position information generated from it, and finally the picture shot by the virtual camera is pushed to the intelligent terminal. This solves the prior-art problem that changes of the virtual camera's position and parameters, made through animation curves produced manually by a producer, are not real-time. Compared with the prior art, in which the production links are carried out separately and only spliced together at the end, the method lets the shot picture be seen in real time as in traditional shooting, so that the director and photographer can watch the camera's picture in real time while the operation and interaction of a real camera are simulated.
Hereinafter, each step of the virtual photographing method based on optical capturing in the embodiment of the present invention will be described in more detail with reference to the accompanying drawings and embodiments.
Step S10, first position information determined by the optical capturing intelligent terminal is acquired.
To serve as the device representing the virtual camera in the real world, an intelligent terminal with a photographing function must satisfy two conditions: first, it can receive the pushed stream of the picture shot by the virtual camera; second, it can perform the corresponding interaction functions on the virtual camera. In a preferred embodiment the intelligent terminal is a tablet computer; in other embodiments other devices, such as a mobile phone, may be chosen. In addition, the optical capture of the intelligent terminal may be performed with existing optical-capture technology, which is not described in detail here.
And step S20, generating second position information according to the first position information.
The purpose of this step is to unify the real and virtual coordinate systems, so that the motion of the intelligent terminal representing the real camera in the real world stays consistent with the motion of the virtual camera in the virtual world.
Specifically, after the unprocessed first position information representing the position of the intelligent terminal in the real world is received, it is mapped into the virtual light-capture-field coordinate system to generate the second position information.
It should be noted that, during optical capture of the intelligent terminal, the effective optical-capture range in the real world is called the light-capture field; this area limits the movement range of any object to be optically captured, such as the device representing the virtual camera. Because the positive X and Y directions in the light-capture field do not coincide with those in the virtual environment, position information passed between them must be converted to keep the motion consistent. In this embodiment the first position information is mapped into the virtual light-capture-field coordinate system to generate the second position information, avoiding the complexity of multiple coordinate-system conversions.
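As a concrete illustration, the conversion described above can be sketched as a single mapping function. The patent does not specify the exact transform; the flipped X/Y signs and the uniform scale factor below are assumptions chosen only to illustrate correcting mismatched positive axis directions between the light-capture field and the virtual scene.

```python
def map_to_virtual(first_pos, scale=1.0):
    """Map first position information (x, y, z) in the real
    light-capture field to second position information in the
    virtual scene.

    The negated X/Y components model the mismatched positive axis
    directions described above; both the sign convention and the
    scale factor are illustrative assumptions, not taken from the
    patent.
    """
    x, y, z = first_pos
    return (-x * scale, -y * scale, z * scale)
```

Because the same mapping is applied every frame, the virtual camera's motion stays consistent with the terminal's motion without chaining multiple coordinate-system conversions.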
And step S30, performing virtual shooting according to the second position information.
Because the second position information changes synchronously with the first position information, the virtual camera can change its position or orientation in the virtual scene following the position change of the real camera, enabling real-time virtual shooting.
And step S40, pushing the picture shot by the virtual camera to the intelligent terminal.
The picture shot by the virtual camera is pushed to the intelligent terminal, so that the virtually shot picture is displayed on the terminal in real time. Compared with the prior art, in which the production links are carried out separately and only spliced together at the end, this embodiment lets the shot picture be seen in real time as in traditional shooting.
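Steps S10 to S40 can be sketched as one per-frame loop. The tracker, camera, and terminal interfaces below are hypothetical stand-ins for the optical-capture system, the engine's virtual camera, and the tablet; none of their names or methods come from the patent, and the stub classes exist only to make the loop self-contained.

```python
import time


class StubTracker:
    """Stand-in for the optical-capture system (hypothetical API)."""
    def read_position(self):
        return (1.0, 2.0, 0.5)  # first position information


class StubCamera:
    """Stand-in for the virtual camera in the engine (hypothetical)."""
    def map_to_virtual(self, pos):
        # Assumed axis correction between light-capture field and scene.
        x, y, z = pos
        return (-x, -y, z)

    def render(self, pos):
        return {"camera_pos": pos}  # placeholder for a rendered frame


class StubTerminal:
    """Stand-in for the tablet receiving the pushed stream."""
    def __init__(self):
        self.frames = []

    def push_frame(self, frame):
        self.frames.append(frame)


def run_virtual_shoot(tracker, camera, terminal, frames, fps=30.0):
    """One iteration per captured frame, covering steps S10-S40."""
    for _ in range(frames):
        first_pos = tracker.read_position()            # S10: real pose
        second_pos = camera.map_to_virtual(first_pos)  # S20: convert
        frame = camera.render(second_pos)              # S30: virtual shot
        terminal.push_frame(frame)                     # S40: push stream
        time.sleep(1.0 / fps)
```

In a real deployment the stubs would be replaced by the optical-capture SDK, the render engine, and a streaming channel to the tablet; the loop structure itself is the point of the sketch.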
Referring to fig. 2, the virtual shooting method based on optical capturing according to the embodiment of the invention further includes the following steps:
step S11, acquiring first face data; and acquiring the target face through the optical camera and/or the depth camera to obtain first face data.
Step S21, generating second face data by performing matching processing on the first face data, so as to reduce the differences in the same virtual character when generated from different performers.
Step S31, the second face data is used as face target data of a preset character model, and a facial expression corresponding to the preset character model is generated.
Referring to fig. 3, the generating the second face data by performing matching processing on the first face data specifically includes the following steps:
step S2110, the first face data in the state of no expression is used as an initial value, and the initial value is subtracted from the first face data of each other frame to initialize the face;
step S2120, the face-initialized first face data is multiplied by a preset coefficient to scale the expression as a whole.
According to results obtained in actual tests of the acquired first face data of different actors, the same expression action, for example opening the mouth, may yield values in the range 0.3 to 0.7 for one actor and 0.2 to 0.9 for another. Through the above steps, even when the acquired first face data of different actors is uniformly large or small, the similarity of the same virtual character performed by different actors is adjusted and the expression details of the virtual character are supplemented, achieving the technical effect that the generated virtual character does not differ greatly when the actor is replaced.
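A minimal sketch of the two matching steps, assuming each frame of first face data is a set of named blendshape-style channels with values in [0, 1]. The channel name and the per-actor coefficient below are illustrative assumptions; the patent states only that an expressionless frame is subtracted (S2110) and the result is scaled by a preset coefficient (S2120).

```python
def match_face_data(frames, neutral, coefficient):
    """Generate second face data from first face data.

    frames:      per-frame first face data for one actor; each frame
                 is a dict of {channel_name: value}
    neutral:     the actor's expressionless frame (the initial value)
    coefficient: assumed per-actor scale, chosen so that different
                 actors' value ranges for the same expression line up
    """
    matched = []
    for frame in frames:
        matched.append({
            # S2110: subtract the expressionless initial value,
            # S2120: scale the whole expression by the coefficient.
            ch: (value - neutral.get(ch, 0.0)) * coefficient
            for ch, value in frame.items()
        })
    return matched
```

For the mouth-opening example above, an actor whose raw values span 0.3 to 0.7 would use a larger coefficient than one spanning 0.2 to 0.9, so both drive the character model over a comparable range.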
In addition, this embodiment can also capture the motion of a human body so that the virtual character moves synchronously with the real body. Specifically, this is realized by the following steps: forming skeleton data according to the acquired key-point position data; binding the skeleton data to a skeleton model; and driving the virtual character in real time according to the bound skeleton data.
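The three skeleton steps can be sketched as follows, assuming key points arrive as named 3-D positions. The bone names and the key-point-to-bone binding are hypothetical, and "driving" the character is reduced here to rebuilding the bound skeleton from each new frame of key points.

```python
def build_skeleton(keypoints, hierarchy):
    """Form bone data from captured key-point positions (assumed shape).

    keypoints: {name: (x, y, z)} from the motion-capture system
    hierarchy: {bone_name: (head_keypoint, tail_keypoint)} -- an
               assumed binding between bones and key points
    """
    return {
        bone: {"head": keypoints[head_kp], "tail": keypoints[tail_kp]}
        for bone, (head_kp, tail_kp) in hierarchy.items()
    }


def drive_character(frames, hierarchy):
    """Drive the virtual character frame by frame: recompute the bound
    skeleton pose from each incoming set of key points."""
    return [build_skeleton(kp, hierarchy) for kp in frames]
```

A production system would map these bone transforms onto the rigged character model inside the engine; the sketch only shows the data flow from key points to per-frame bone data.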
Based on the above-mentioned virtual shooting method based on optical capturing, the embodiment of the invention also provides a virtual shooting system based on optical capturing, as shown in fig. 4, the system comprises the following modules:
a first location information processing module 100 configured to acquire first location information determined by the optical capturing smart terminal;
a second location information processing module 200 configured to generate second location information from the first location information;
a virtual photographing module 300 configured to perform virtual photographing using the second position information as a position of a virtual camera;
the frame pushing module 400 is configured to push the frame shot by the virtual camera to the intelligent terminal.
As shown in fig. 5, in an embodiment, a virtual shooting system based on optical capturing further includes:
a first face data processing module 110 configured to acquire first face data;
a second face data processing module 210 configured to generate second face data by performing a matching process on the first face data;
the facial expression generating module 310 is configured to use the second facial data as facial target data of a preset character model to generate a facial expression corresponding to the preset character model.
As shown in fig. 6, in an embodiment, the second face data processing module further includes:
the face initialization module 21100 is configured to perform face initialization by subtracting the initial value from the first face data of each other frame, wherein the initial value is the first face data in the non-expressive state;
the expression overall scaling module 21200 is configured to multiply the face initialized first face data by a preset coefficient to perform expression overall scaling.
In summary, the virtual shooting system based on optical capturing according to the embodiments of the present invention may be implemented as a program and run on a computer device. The memory of the computer device may store the program modules constituting the system, such as the first location information processing module 100, the second location information processing module 200, the virtual photographing module 300, and the picture pushing module 400 shown in fig. 4. The program of each module causes a processor to execute the steps of the virtual shooting method based on optical capturing described in this specification for each embodiment of the present application.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements steps in an optical capture-based virtual photographing method of various embodiments of the present application.
The technical features of the foregoing embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but all such combinations should be considered within the scope of this disclosure.
The above embodiments are illustrative of the present invention and not limiting; any simple modification of the invention falls within its scope. The above description is only a preferred embodiment, and the protection scope is not limited to these examples: all technical solutions within the concept of the invention belong to its protection scope. Modifications and adaptations made by those skilled in the art without departing from the principles of the invention are likewise within its scope.

Claims (5)

1. The virtual shooting method based on optical capturing is characterized by comprising the following steps of:
acquiring first position information determined by an optical capturing intelligent terminal;
generating second position information according to the first position information;
performing virtual photographing according to the second position information;
pushing the picture shot by the virtual camera to the intelligent terminal;
further comprises: acquiring first face data;
generating second face data by carrying out matching processing on the first face data;
taking the second face data as face target data of a preset character model, and generating a facial expression corresponding to the preset character model;
wherein the generating the second face data by performing the matching processing on the first face data includes:
taking the first face data in the state of no expression as an initial value, subtracting the initial value from the first face data of each other frame to initialize the face;
and multiplying the face-initialized first face data by a preset coefficient to scale the expression as a whole, wherein the preset coefficient represents the degree to which different actors perform the same expression.
2. The virtual photographing method based on optical capturing as claimed in claim 1, further comprising the steps of:
forming skeleton data according to the acquired key point position data;
binding the bone data with a bone model;
the virtual character is driven in real time according to the bound bone data.
3. The virtual shooting method based on optical capturing of claim 1, wherein the intelligent terminal is a tablet computer.
4. A virtual photographing system based on optical capturing, comprising:
the first position information processing module is configured to acquire first position information determined by the optical capturing intelligent terminal;
a second location information processing module configured to generate second location information from the first location information;
a virtual photographing module configured to perform virtual photographing using the second position information as a position of a virtual camera;
the picture pushing module is configured to push pictures shot by the virtual camera to the intelligent terminal;
further comprises: the first face data processing module is configured to acquire first face data;
the second face data processing module is configured to generate second face data by carrying out matching processing on the first face data;
a facial expression generating module configured to take the second face data as face target data of a preset character model to generate a facial expression corresponding to the preset character model;
the second face data processing module further includes:
the face initialization module is configured to take the first face data in the non-expression state as an initial value, and the initial value is subtracted from the first face data of each other frame to perform face initialization;
the expression overall scaling module is configured to multiply the face-initialized first face data by a preset coefficient to scale the expression as a whole, wherein the preset coefficient represents the degree to which different actors perform the same expression.
5. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the optical capture-based virtual photographing method of any one of claims 1 to 3.
CN202210060188.3A 2022-01-19 2022-01-19 Virtual shooting method, system and storage medium based on optical capturing Active CN114422697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210060188.3A CN114422697B (en) 2022-01-19 2022-01-19 Virtual shooting method, system and storage medium based on optical capturing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210060188.3A CN114422697B (en) 2022-01-19 2022-01-19 Virtual shooting method, system and storage medium based on optical capturing

Publications (2)

Publication Number Publication Date
CN114422697A CN114422697A (en) 2022-04-29
CN114422697B true CN114422697B (en) 2023-07-18

Family

ID=81274482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210060188.3A Active CN114422697B (en) 2022-01-19 2022-01-19 Virtual shooting method, system and storage medium based on optical capturing

Country Status (1)

Country Link
CN (1) CN114422697B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781610A (en) * 2021-06-28 2021-12-10 武汉大学 Virtual face generation method
WO2022005300A1 (en) * 2020-07-02 2022-01-06 Weta Digital Limited Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
CN113905145A (en) * 2021-10-11 2022-01-07 浙江博采传媒有限公司 LED circular screen virtual-real camera focus matching method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2161871C2 (en) * 1998-03-20 2001-01-10 Латыпов Нурахмед Нурисламович Method and device for producing video programs
US9729765B2 (en) * 2013-06-19 2017-08-08 Drexel University Mobile virtual cinematography system
CN106027855B (en) * 2016-05-16 2019-06-25 深圳迪乐普数码科技有限公司 A kind of implementation method and terminal of virtual rocker arm
CN207234975U (en) * 2017-02-09 2018-04-13 量子动力(深圳)计算机科技有限公司 The system of seizure performer expression based on virtual image technology
US20200090392A1 (en) * 2018-09-19 2020-03-19 XRSpace CO., LTD. Method of Facial Expression Generation with Data Fusion
CN109859297B (en) * 2019-03-07 2023-04-18 灵然创智(天津)动画科技发展有限公司 Mark point-free face capturing device and method
US11069135B2 (en) * 2019-03-07 2021-07-20 Lucasfilm Entertainment Company Ltd. On-set facial performance capture and transfer to a three-dimensional computer-generated model
CN112040092B (en) * 2020-09-08 2021-05-07 杭州时光坐标影视传媒股份有限公司 Real-time virtual scene LED shooting system and method
CN113537056A (en) * 2021-07-15 2021-10-22 广州虎牙科技有限公司 Avatar driving method, apparatus, device, and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022005300A1 (en) * 2020-07-02 2022-01-06 Weta Digital Limited Generating an animation rig for use in animating a computer-generated character based on facial scans of an actor and a muscle model
CN113781610A (en) * 2021-06-28 2021-12-10 武汉大学 Virtual face generation method
CN113905145A (en) * 2021-10-11 2022-01-07 浙江博采传媒有限公司 LED circular screen virtual-real camera focus matching method and system

Also Published As

Publication number Publication date
CN114422697A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN108986189B (en) Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation
CN112819944B (en) Three-dimensional human body model reconstruction method and device, electronic equipment and storage medium
CN112884881B (en) Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
WO2021030002A1 (en) Depth-aware photo editing
US20130101164A1 (en) Method of real-time cropping of a real entity recorded in a video sequence
CN106161939B (en) Photo shooting method and terminal
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN106027855B (en) A kind of implementation method and terminal of virtual rocker arm
CN109840946B (en) Virtual object display method and device
CN112308977B (en) Video processing method, video processing device, and storage medium
CN114445562A (en) Three-dimensional reconstruction method and device, electronic device and storage medium
CN114387445A (en) Object key point identification method and device, electronic equipment and storage medium
TW202145065A (en) Image processing method, electronic device and computer-readable storage medium
CN109413152A (en) Image processing method, device, storage medium and electronic equipment
CN114422696A (en) Virtual shooting method and device and storage medium
WO2023217138A1 (en) Parameter configuration method and apparatus, device, storage medium and product
CN112511815B (en) Image or video generation method and device
CN114511596A (en) Data processing method and related equipment
CN114422697B (en) Virtual shooting method, system and storage medium based on optical capturing
WO2024031882A1 (en) Video processing method and apparatus, and computer readable storage medium
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN114429484A (en) Image processing method and device, intelligent equipment and storage medium
CN114286004A (en) Focusing method, shooting device, electronic equipment and medium
CN113723168A (en) Artificial intelligence-based subject identification method, related device and storage medium

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant