CN108668050B - Video shooting method and device based on virtual reality

Info

Publication number
CN108668050B
CN108668050B (grant of application CN201710210901.7A)
Authority
CN
China
Prior art keywords
information
user
prop model
state
model
Prior art date: 2017-03-31
Legal status
Active
Application number
CN201710210901.7A
Other languages
Chinese (zh)
Other versions
CN108668050A (en)
Inventor
李炜 (Li Wei)
胡治国 (Hu Zhiguo)
Current Assignee
Inlife Handnet Co Ltd
Original Assignee
Inlife Handnet Co Ltd
Priority date: 2017-03-31
Filing date: 2017-03-31
Publication date: 2021-04-27
Application filed by Inlife Handnet Co Ltd filed Critical Inlife Handnet Co Ltd
Priority to CN201710210901.7A priority Critical patent/CN108668050B/en
Publication of CN108668050A publication Critical patent/CN108668050A/en
Application granted granted Critical
Publication of CN108668050B publication Critical patent/CN108668050B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224: Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a virtual reality-based video shooting method and device. The video shooting method comprises the steps of displaying a prop model to a user, acquiring interaction information between the user and the prop model, acquiring a corresponding virtual scene based on the interaction information, and then fusing the interaction information with the corresponding virtual scene in real time to output a target video. In this scheme, a pre-constructed prop model is displayed to the user in a virtual reality manner, which improves the degree of fit between the user's interaction information, the prop model, and the virtual scene, and thereby improves shooting efficiency.

Description

Video shooting method and device based on virtual reality
Technical Field
The invention relates to the technical field of virtual reality, in particular to a video shooting method and device based on virtual reality.
Background
VR (Virtual Reality, also called immersive environment technology or artificial environment technology) is a comprehensive integration technology involving computer graphics, human-computer interaction, sensing technology, artificial intelligence, and related fields. It uses three-dimensional graphics generation, multi-sensory interaction, and high-resolution display technologies to generate a realistic three-dimensional virtual environment. By wearing special sensing equipment such as a helmet or data gloves, a user can experience realistic visual, auditory, olfactory, and other sensations. Alternatively, the user can enter the virtual space through input devices such as a keyboard and mouse, become part of the virtual environment, interact with it in real time, and perceive and operate various objects in the virtual world, thereby obtaining an immersive, first-hand experience. VR is a brand-new way for people to visualize and interact with complex data through a computer; compared with traditional human-computer interfaces and popular window-based operation, it represents a qualitative leap in interaction technology.
Green screen shooting has attracted wide interest in the film and television industry. In traditional green screen shooting, actors or other subjects are filmed against a green screen background, the background is then keyed out, and the corresponding special effects are finally composited in post-production. However, with this approach the special effects added in post-production often do not fit well spatially with the subjects shot earlier; when the mismatch is too large, the footage must be reshot, which reduces shooting efficiency and wastes manpower and material resources.
Disclosure of Invention
The embodiment of the invention provides a video shooting method and device based on virtual reality, which can improve the shooting efficiency of videos.
The embodiment of the invention provides a video shooting method based on virtual reality, which comprises the following steps:
displaying the prop model to a user;
acquiring interaction information of the user and the prop model;
acquiring a corresponding virtual scene based on the interaction information;
and fusing the interaction information and the corresponding virtual scene in real time to output a target video.
In some embodiments, the step of displaying the prop model to the user comprises:
establishing a state database of the prop model;
displaying the original state of the prop model to a user;
generating a corresponding response state in the state database based on the body movements of the user;
and displaying the response state of the prop model to the user.
In some embodiments, the step of obtaining the interaction information of the user and the prop model comprises:
acquiring the body movement, expression and language information of the user;
and reading the response state of the prop model from the state database according to the body movements, expressions and language information of the user to generate response information.
In some embodiments, the step of obtaining the corresponding virtual scene based on the interaction information includes:
establishing a virtual scene database;
acquiring a virtual position of the prop model in the interactive information;
and generating a corresponding virtual scene in the virtual scene database according to the virtual position.
In some embodiments, the step of fusing the interaction information with the corresponding virtual scene in real time to output the target video includes:
rendering the interaction information into images to obtain an interactive picture of the user and the prop model;
and fusing the interactive picture with the corresponding virtual scene in real time to output a target video.
Correspondingly, the embodiment of the invention provides a video shooting device based on virtual reality, which comprises:
the display module is used for displaying the prop model to a user;
the information acquisition module is used for acquiring the interaction information of the user and the prop model;
the scene acquisition module is used for acquiring a corresponding virtual scene based on the interaction information;
and the fusion module is used for fusing the interaction information and the corresponding virtual scene in real time so as to output the target video.
In some embodiments, the display module comprises:
the state establishing unit is used for establishing a state database of the prop model;
the first display unit is used for displaying the original state of the prop model to a user;
a state acquisition unit, configured to generate a corresponding response state in the state database based on the body motion of the user;
and the second display unit is used for displaying the response state of the prop model to the user.
In some embodiments, the information acquisition module comprises:
the information acquisition unit is used for acquiring the body action, expression and language information of the user;
and the information generation unit is used for reading the response state of the prop model from the state database according to the body movements, expressions and language information of the user so as to generate response information.
In some embodiments, the scene acquisition module comprises:
the scene establishing unit is used for establishing a virtual scene database;
the position acquisition unit is used for acquiring the virtual position of the prop model in the interaction information;
and the scene generation unit is used for generating a corresponding virtual scene in the virtual scene database according to the virtual position.
In some embodiments, the fusion module comprises:
the conversion unit is used for rendering the interactive information to obtain an interactive picture of the user and the prop model;
and the fusion unit is used for fusing the interactive picture and the corresponding virtual scene in real time so as to output the target video.
The virtual reality-based video shooting method provided by the embodiment of the invention displays the prop model to the user, acquires the interaction information between the user and the prop model, acquires the corresponding virtual scene based on the interaction information, and then fuses the interaction information with the corresponding virtual scene in real time to output the target video. In this scheme, the pre-constructed prop model is displayed to the user in a virtual reality manner, which improves the degree of fit between the user's interaction information, the prop model, and the virtual scene, and thereby improves shooting efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video shooting method based on virtual reality according to an embodiment of the present invention.
Fig. 2 is another schematic flow chart of a virtual reality-based video shooting method according to an embodiment of the present invention.
Fig. 3 is a schematic view of an application scenario of a virtual reality-based video shooting system according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a virtual reality-based video shooting device according to an embodiment of the present invention.
Fig. 5 is another schematic structural diagram of a virtual reality-based video shooting device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the objects so described are interchangeable under appropriate circumstances. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
In this patent document, the drawings discussed below and the embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present invention may be implemented in any suitably arranged system. Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Further, a terminal according to an exemplary embodiment will be described in detail with reference to the accompanying drawings. Like reference symbols in the various drawings indicate like elements.
The terms used in the description of the present invention are only used to describe specific embodiments and are not intended to limit the concept of the present invention. Unless the context clearly dictates otherwise, expressions used in the singular form encompass the plural form. In the present specification, it is to be understood that terms such as "comprising", "having", and "containing" are intended to specify the presence of the stated features, integers, steps, acts, or combinations thereof, and are not intended to preclude the presence or addition of one or more other features, integers, steps, acts, or combinations thereof. Like reference symbols in the various drawings indicate like elements.
The embodiment of the invention provides a video shooting method and device based on virtual reality. The details will be described below separately.
In a preferred embodiment, a virtual reality-based video shooting method is provided. As shown in fig. 1, the process may be as follows:
101. and displaying the prop model to the user.
Specifically, the prop model may be a three-dimensional model of a virtual prop constructed in advance, for example, a three-dimensional model of a monster in a science fiction film, a three-dimensional model of a weapon, and the like.
Wherein, the prop model can be stored in a corresponding storage area of the terminal or the server.
In a specific implementation, the prop model may be displayed to the user through virtual reality glasses worn by the user.
102. And acquiring the interaction information of the user and the prop model.
In the embodiment of the invention, the interaction information refers to the action information, expression information, form information, language information, position information, and other information exhibited by the user and the prop model respectively when they interact.
103. And acquiring a corresponding virtual scene based on the interaction information.
In some embodiments, the required information may be extracted from the interaction information, and the corresponding virtual scene obtained according to that information. For example, a mapping relationship between interaction information and virtual scenes may be pre-established, and the interaction information, the virtual scenes, and the mapping relationships stored to obtain a mapping-relation set. A target virtual scene corresponding to given target interaction information is then obtained from the mapping-relation set according to the mapping relationship, as sketched below.
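As a hedged illustration only (the patent does not prescribe any data structure for the mapping-relation set), such a lookup could be modeled as a simple table; the names SCENE_MAP and get_virtual_scene, and the example keys, are hypothetical:

```python
# Illustrative sketch only; the patent does not prescribe an implementation.
# SCENE_MAP, get_virtual_scene, and the example keys are hypothetical names.

# Pre-established mapping-relation set: interaction-information key -> virtual scene.
SCENE_MAP = {
    ("fire_weapon", "monster_zone"): "light_wave_scene",
    ("walk", "forest_zone"): "forest_scene",
}

def get_virtual_scene(action: str, zone: str, default: str = "empty_scene") -> str:
    """Look up the target virtual scene for the given target interaction information."""
    return SCENE_MAP.get((action, zone), default)

print(get_virtual_scene("fire_weapon", "monster_zone"))  # -> light_wave_scene
```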
104. And fusing the interactive information and the corresponding virtual scene in real time to output a target video.
In practical application, the interaction information and the virtual scene can be matched in real time to obtain multiple frames of three-dimensional images, and the resulting frames can then be made into a video file, so that the interaction information and the corresponding virtual scene are fused into a target video and video shooting is realized.
Thus, the embodiment of the invention provides a virtual reality-based video shooting method: a prop model is displayed to the user, the interaction information between the user and the prop model is acquired, a corresponding virtual scene is acquired based on the interaction information, and the interaction information is then fused with the corresponding virtual scene in real time to output a target video. In this scheme, the pre-constructed prop model is displayed to the user in a virtual reality manner, which improves the degree of fit between the user's interaction information, the prop model, and the virtual scene, and thereby improves shooting efficiency.
In another embodiment of the present invention, another virtual reality-based video capture method is also provided. As shown in fig. 2, the process may be as follows:
201. and establishing a state database of the prop model.
In this embodiment, the state database may be stored in the terminal device or in a server. In a specific implementation, to improve data reading speed and facilitate data access, the state database may be stored locally on the terminal.
The prop model may be a virtual prop three-dimensional model constructed in advance, for example, a monster three-dimensional model in science fiction films, a weapon three-dimensional model, or the like.
202. And displaying the original state of the prop model to the user.
Specifically, the original state is the initial state of the model, that is, the stable state set by the model designer for the case where the prop model has not received any operation instruction.
203. Based on the limb movement of the user, a corresponding response state is generated in the state database.
Specifically, the force information generated by the user's body movements (including movements of the legs, hands, head, torso, and the like), the position of the user relative to the prop model, the angle and position of the user's limbs relative to a specific part of the prop model, and similar information can be acquired. The acquired information is then analyzed to obtain the corresponding parameter information, which is processed by a correlation algorithm to obtain the parameter information of the prop model. A response state is generated from the resulting parameters and stored in the state database.
In practical application, the user's body movements can be acquired through a series of virtual reality devices. For example, various parameters of the user's body can be acquired through equipment such as a data garment, data gloves, and a data bracelet worn by the user, and the user's body movements determined from these parameters, as sketched below.
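A minimal sketch of how such captured parameters might be mapped to a response state follows; the patent leaves the "correlation algorithm" unspecified, so the thresholds, field names, and state names below are all invented for illustration:

```python
# Hedged illustration only: the patent does not specify the correlation
# algorithm, so the thresholds and state names here are invented.
from dataclasses import dataclass

@dataclass
class BodyMotion:
    force: float     # force information from data gloves / data garment
    angle: float     # limb angle relative to a specific part of the prop model
    distance: float  # user position relative to the prop model

def response_state(motion: BodyMotion) -> str:
    """Derive the prop model's response state from captured motion parameters."""
    if motion.distance < 1.0 and motion.force > 50.0:
        return "knocked_down"    # strong close-range hit
    if motion.distance < 1.0:
        return "staggers_back"   # weak close-range contact
    return "idle"                # out of range: keep the original state

print(response_state(BodyMotion(force=80.0, angle=15.0, distance=0.5)))  # knocked_down
```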
204. And displaying the corresponding state of the prop model to the user.
The response state of the prop model is displayed to the user in real time according to the user's body movements. For example, if the user throws a left-handed punch toward the head of the prop model, the prop model correspondingly staggers back several steps and falls to the ground.
205. And acquiring the body movement, expression and language information of the user.
Specifically, in the process of interaction between the user and the prop model, the body movement, the expression, the language information and the like of the user can be acquired.
206. And reading the response state of the prop model from the state database to generate response information.
In some embodiments, the response state of the prop model may vary accordingly; for example, the response state may include body movements, expressions, language information, and the like.
In a specific implementation, the body movements of the prop model can be obtained according to one or more of the user's body movements, expressions, and language information; the expression of the prop model and the language information of the prop model can likewise each be obtained according to one or more of the user's body movements, expressions, and language information. Response information is then generated according to the response state thus obtained, as sketched below.
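Purely as an illustrative sketch of the lookup described above (STATE_DB, its keys, and read_response are hypothetical names; the patent does not specify how the state database is keyed):

```python
# Minimal sketch; STATE_DB is a hypothetical stand-in for the state database.
STATE_DB = {
    # (body action, expression, speech) -> prop model response state
    ("punch", "angry", "attack!"): {"action": "fall", "expression": "pained", "speech": "roar"},
    ("wave", "smile", "hello"):    {"action": "wave_back", "expression": "friendly", "speech": "greeting"},
}

def read_response(action=None, expression=None, speech=None):
    """Read the prop model's response state using one or more of the user's inputs."""
    for (a, e, s), state in STATE_DB.items():
        # Any provided input must match; omitted inputs are ignored.
        if all(x is None or x == y for x, y in ((action, a), (expression, e), (speech, s))):
            return state
    return None

print(read_response(action="punch"))  # matches on the body action alone
```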
207. And acquiring a corresponding virtual scene based on the interaction information between the user and the prop model.
In this embodiment, the interaction information may include the user's body movements, expressions, and language information, together with the response information of the prop model. The virtual scene may be obtained in various ways; optionally, the corresponding virtual scene may be determined according to the position information of the prop model in the virtual space. That is, the step of "obtaining the corresponding virtual scene based on the interaction information between the user and the prop model" may include the following processes:
establishing a virtual scene database;
acquiring the virtual position of the prop model in the interaction information;
and generating a corresponding virtual scene in the virtual scene database according to the virtual position.
Specifically, the virtual scene database may be established in a storage area of the terminal device, so as to facilitate data access and improve data reading speed.
For example, when the prop model moves from the leftmost side of the scene to the rightmost side, the scenery on the left side of the virtual scene disappears along the prop model's motion trajectory while new scenery is added on the right, as sketched below.
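The scrolling behavior in this example could be modeled, as a toy sketch only, by a sliding window over a strip of scene tiles (SCENE_STRIP, VIEW_WIDTH, and visible_scene are invented names, not part of the patented method):

```python
# Toy model of the example above: as the prop model moves right, scenery on the
# left drops out of view and new scenery is appended on the right.
SCENE_STRIP = ["ruins", "street", "bridge", "park", "harbor"]  # hypothetical tiles
VIEW_WIDTH = 3

def visible_scene(prop_x: int) -> list:
    """Return the scene tiles visible around the prop model's virtual position."""
    start = max(0, min(prop_x, len(SCENE_STRIP) - VIEW_WIDTH))
    return SCENE_STRIP[start:start + VIEW_WIDTH]

print(visible_scene(0))  # ['ruins', 'street', 'bridge']  (prop at leftmost)
print(visible_scene(2))  # ['bridge', 'park', 'harbor']   (prop moved right)
```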
208. And rendering the interaction information into images to obtain an interactive picture of the user and the prop model.
Specifically, the user's body movements, expressions, and language information and the response information of the prop model can be matched and synthesized with time as the reference, so as to obtain time-varying frames of interactive three-dimensional images of the user and the prop model.
209. And fusing the interactive picture with the corresponding virtual scene in real time to output a target video.
Similarly, with time as the reference, the interactive three-dimensional image of the user and the prop model at a given time point and the virtual scene at the same time point are obtained, and image synthesis is performed on them to obtain a new three-dimensional image. By analogy, the three-dimensional images at each successive time point are synthesized to obtain multiple frames of three-dimensional images fusing the interaction information and the virtual scene. The resulting frames are then assembled into a video file in time order, so that the interaction information and the corresponding virtual scene are fused into a target video, realizing video shooting.
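A compressed sketch of that time-referenced fusion loop follows; compose, fuse_to_video, and the frame dictionaries are placeholders rather than a real rendering API, since the patent does not specify the synthesis algorithm:

```python
# Compressed sketch of the time-referenced fusion described above.
# compose() and the frame dictionaries are placeholders, not a real rendering API.

def compose(interaction_frame: str, scene_frame: str) -> str:
    """Synthesize one fused three-dimensional image from the two frames."""
    return f"{interaction_frame}+{scene_frame}"

def fuse_to_video(interaction_frames: dict, scene_frames: dict) -> list:
    """Pair frames that share a time point, then order them into a video sequence."""
    timeline = sorted(set(interaction_frames) & set(scene_frames))
    return [compose(interaction_frames[t], scene_frames[t]) for t in timeline]

target_video = fuse_to_video(
    {0: "user_punch", 1: "user_step"},    # interactive 3D images by time point
    {0: "light_wave", 1: "scene_shift"},  # virtual scenes at the same time points
)
print(target_video)  # ['user_punch+light_wave', 'user_step+scene_shift']
```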
As can be seen from the above, the embodiment of the present invention provides a virtual reality-based video shooting method, which establishes a state database for the prop model, displays the original state of the prop model to the user, generates a corresponding response state in the state database based on the user's body movements, and displays the response state of the prop model to the user. The user's body movements, expressions, and language information are further acquired, and the response state of the prop model is read from the state database to generate response information. A corresponding virtual scene is generated in the virtual scene database according to the virtual position of the prop model, and finally the user's body movements, expressions, and language information, the response information of the prop model, and the corresponding virtual scene are fused to output the target video and realize video shooting. In this scheme, the pre-constructed prop model is displayed to the user in a virtual reality manner. On the one hand, the user can interact with the prop model in the virtual space, which improves the realism of the video shoot; on the other hand, because the special effects are produced first, the fit between the user's interaction information, the prop model, and the virtual scene is improved, avoiding the green screen problem of special effects added in post-production fitting the previously shot subjects poorly and requiring a reshoot. Shooting efficiency is thereby improved, and manpower and material resources are saved.
Referring to fig. 3, another embodiment of the present invention provides a virtual reality-based video shooting system. As shown, the video shooting system includes a photographing device 33, a server 34, and a display control device 36.
The photographing device 33 may be a video camera, a still camera, or another electronic device with a shooting function, and is used to collect image information. For example, the photographing device may be used to capture the user's body movements, expressions, language information, and the like.
The server 34 may be a network device such as a data server or a web server. The server 34 may be used to provide the three-dimensional images of the prop model 341 and the virtual scene 342.
The display control device 36 may include a computer, a smart phone, a tablet computer, or other smart devices having an arithmetic processing function.
In yet another embodiment, a further virtual reality-based video shooting method is provided. Based on the above video shooting system, the method is described in detail below, taking the case where the prop model 341 is a monster model 341 as an example.
In this embodiment, the user 31 enters the shooting scene wearing virtual reality equipment 32 (such as data gloves, a data garment, virtual reality glasses, data shoes, and a data bracelet). The original state of the monster model 341 provided by the server 34 is then presented to the user 31 through a three-dimensional visual display device (e.g., a CAVE large-scale projection system). The user 31 sees the monster model 341 through the virtual reality glasses and interacts with it. The user's body movements are captured by the data garment, data gloves, and the like, the corresponding parameter information is transmitted to the server 34, and based on this information the server 34 retrieves and displays the corresponding virtual scene 342 and response state of the monster model 341.
As shown in fig. 3, when the user 31 performs the action of firing a weapon at the monster model 341, the server 34 generates a corresponding light wave (i.e., virtual scene 342) based on the user's body movement, and the monster model 341 exhibits a knocked-down state in response to that action. The photographing device 33 captures the user's body movements, expressions, language information, and the like, and transmits them in real time to the display control device 36 for display; based on the user's body movements, the server 34 likewise transmits the virtual scene 342 and the monster model 341 to the display control device 36 in real time, so as to obtain the fused picture 35.
In practical applications, in order to improve the realism of the scene, sound devices (such as a three-dimensional surround sound system or non-traditional stereo equipment) may be used to give the virtual special effects and the monster model 341 corresponding sound effects.
As can be seen from the above, the virtual reality-based video shooting method provided by the embodiment of the invention displays the pre-constructed prop model to the user in a virtual reality manner, which improves the degree of fit between the user's interaction information, the prop model, and the virtual scene, and improves shooting efficiency.
In another embodiment of the present invention, a virtual reality-based video shooting device is also provided. As shown in fig. 4, the virtual reality-based video shooting device may include a display module 41, an information acquisition module 42, a scene acquisition module 43, and a fusion module 44, wherein:
a display module 41, configured to display the prop model to a user;
an information obtaining module 42, configured to obtain interaction information between the user and the prop model;
a scene obtaining module 43, configured to obtain a corresponding virtual scene based on the interaction information;
and the fusion module 44 is configured to fuse the interaction information with the corresponding virtual scene in real time to output a target video.
Referring to fig. 5, in some embodiments, the display module 41 may include a state establishing unit 411, a first display unit 412, a state acquisition unit 413, and a second display unit 414, wherein:
a state establishing unit 411, configured to establish a state database of the prop model;
a first display unit 412, configured to display an original state of the prop model to a user;
a state acquisition unit 413, configured to generate a corresponding response state in the state database based on the body movements of the user;
a second display unit 414, configured to display the response state of the prop model to the user.
With continued reference to fig. 5, in some embodiments, the information acquisition module 42 may include an information acquisition unit 421 and an information generation unit 422, wherein:
an information obtaining unit 421, configured to obtain the body movement, expression, and language information of the user;
an information generating unit 422, configured to read the response state of the prop model from the state database to generate response information.
With continued reference to fig. 5, in some embodiments, the scene acquisition module 43 may include a scene establishing unit 431, a location acquisition unit 432, and a scene generating unit 433, wherein:
a scene establishing unit 431, configured to establish a virtual scene database;
a position obtaining unit 432, configured to obtain a virtual position where the prop model is located in the interaction information;
a scene generating unit 433, configured to generate a corresponding virtual scene in the virtual scene database according to the virtual position.
With continued reference to fig. 5, in some embodiments, the fusion module 44 may include a conversion unit 441 and a fusion unit 442, wherein:
a conversion unit 441, configured to render the interactive information into a picture to obtain an interactive picture between the user and the prop model;
the fusion unit 442 is configured to fuse the interactive picture with the corresponding virtual scene in real time to output a target video.
Thus, the embodiment of the invention provides a virtual reality-based video shooting device, which displays the prop model to the user, acquires the interaction information between the user and the prop model, acquires the corresponding virtual scene based on the interaction information, and then fuses the interaction information with the corresponding virtual scene in real time to output the target video. In this scheme, the pre-constructed prop model is displayed to the user in a virtual reality manner, which improves the degree of fit between the user's interaction information, the prop model, and the virtual scene, and thereby improves shooting efficiency.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the concepts of the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural. Moreover, unless otherwise indicated herein, recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, and each separate value is incorporated into the specification as if it were individually recited herein. In addition, the steps of all methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context; the present invention is not limited to the described order of steps. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate the inventive concept and does not pose a limitation on the scope of the inventive concept unless otherwise claimed. Various modifications and adaptations will be apparent to those skilled in the art without departing from the spirit and scope of the present invention.
The video shooting method and device based on virtual reality provided by the embodiment of the invention are described in detail above. It should be understood that the exemplary embodiments described herein should be considered merely illustrative for facilitating understanding of the method of the present invention and its core ideas, and not restrictive. Descriptions of features or aspects in each exemplary embodiment should generally be considered as applicable to similar features or aspects in other exemplary embodiments. While the present invention has been described with reference to exemplary embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present invention encompass such changes and modifications as fall within the scope of the appended claims.

Claims (6)

1. A video shooting method based on virtual reality is characterized by comprising the following steps:
establishing a state database of the prop model;
displaying the original state of the prop model to a user;
generating a corresponding response state in the state database based on the body movements of the user, comprising: acquiring force information generated by the body movements of the user, and angle information and position information of the user's limbs relative to a specific part of the prop model; analyzing the acquired information and processing it through a correlation algorithm to obtain display parameter information of the prop model; and generating a response state of the prop model based on the display parameter information;
displaying the response state of the prop model to the user;
acquiring interaction information of the user and the prop model, wherein the interaction information comprises: the action information, expression information, form information, language information and position information respectively exhibited when the user interacts with the prop model;
acquiring a corresponding virtual scene based on the interaction information;
rendering the interaction information into images to obtain an interactive picture of the user and the prop model;
and fusing the interactive picture with the corresponding virtual scene in real time to output a target video.
2. The virtual reality-based video shooting method of claim 1, wherein the step of obtaining interaction information of the user and the prop model comprises:
acquiring the body movement, expression and language information of the user;
and reading the response state of the prop model from the state database according to the body movements, expressions and language information of the user to generate response information.
3. The virtual reality-based video shooting method of claim 1, wherein the step of acquiring the corresponding virtual scene based on the interaction information comprises:
establishing a virtual scene database;
acquiring a virtual position of the prop model in the interactive information;
and generating a corresponding virtual scene in the virtual scene database according to the virtual position.
4. A video shooting device based on virtual reality, comprising:
the display module is used for establishing a state database of the prop model; displaying the original state of the prop model to a user; generating a corresponding response state in the state database based on the body movements of the user, comprising: acquiring force information generated by the body movements of the user, and angle information and position information of the user's limbs relative to a specific part of the prop model; analyzing the acquired information and processing it through a correlation algorithm to obtain display parameter information of the prop model; generating a response state of the prop model based on the display parameter information; and displaying the response state of the prop model to the user;
the information acquisition module is used for acquiring interaction information of the user and the prop model, wherein the interaction information comprises: the action information, expression information, form information, language information and position information respectively exhibited when the user interacts with the prop model;
the scene acquisition module is used for acquiring a corresponding virtual scene based on the interaction information;
the fusion module is used for rendering the interaction information into images to obtain an interactive picture of the user and the prop model, and fusing the interactive picture with the corresponding virtual scene in real time to output a target video.
5. The virtual reality-based video shooting device of claim 4, wherein the information acquisition module comprises:
the information acquisition unit is used for acquiring the body action, expression and language information of the user;
and the information generation unit is used for reading the response state of the prop model from the state database according to the body movements, expressions and language information of the user so as to generate response information.
6. The virtual reality-based video shooting device of claim 4, wherein the scene acquisition module comprises:
the scene establishing unit is used for establishing a virtual scene database;
the position acquisition unit is used for acquiring the virtual position of the prop model in the interaction information;
and the scene generation unit is used for generating a corresponding virtual scene in the virtual scene database according to the virtual position.
CN201710210901.7A 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality Active CN108668050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710210901.7A CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710210901.7A CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Publications (2)

Publication Number Publication Date
CN108668050A CN108668050A (en) 2018-10-16
CN108668050B (en) 2021-04-27

Family

ID=63784579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710210901.7A Active CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Country Status (1)

Country Link
CN (1) CN108668050B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112468680A (en) * 2019-09-09 2021-03-09 上海御正文化传播有限公司 Processing method of advertisement shooting site synthesis processing system
CN110673735A (en) * 2019-09-30 2020-01-10 长沙自由视像信息科技有限公司 Holographic virtual human AR interaction display method, device and equipment
CN111192350A (en) * 2019-12-19 2020-05-22 武汉西山艺创文化有限公司 Motion capture system and method based on 5G communication VR helmet
CN111640198A (en) * 2020-06-10 2020-09-08 上海商汤智能科技有限公司 Interactive shooting method and device, electronic equipment and storage medium
CN111760265B (en) * 2020-06-24 2024-03-22 抖音视界有限公司 Operation control method and device
CN111931830B (en) * 2020-07-27 2023-12-29 泰瑞数创科技(北京)股份有限公司 Video fusion processing method and device, electronic equipment and storage medium
CN113327309B (en) * 2021-05-27 2024-04-09 百度在线网络技术(北京)有限公司 Video playing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105188516A (en) * 2013-03-11 2015-12-23 奇跃公司 System and method for augmented and virtual reality
CN106293082A (en) * 2016-08-05 2017-01-04 成都华域天府数字科技有限公司 A kind of human dissection interactive system based on virtual reality

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130061538A (en) * 2011-12-01 2013-06-11 한국전자통신연구원 Apparatus and method for providing contents based virtual reality
CN103869933B (en) * 2012-12-11 2017-06-27 联想(北京)有限公司 The method and terminal device of information processing
CN104460950A (en) * 2013-09-15 2015-03-25 南京大五教育科技有限公司 Implementation of simulation interactions between users and virtual objects by utilizing virtual reality technology
CN104407701A (en) * 2014-11-27 2015-03-11 曦煌科技(北京)有限公司 Individual-oriented clustering virtual reality interactive system
CN106530880A (en) * 2016-08-31 2017-03-22 徐丽芳 Experiment simulation method based on virtual reality technology
CN106448316A (en) * 2016-08-31 2017-02-22 徐丽芳 Fire training method based on virtual reality technology
CN106502388B (en) * 2016-09-26 2020-06-02 惠州Tcl移动通信有限公司 Interactive motion method and head-mounted intelligent equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105188516A (en) * 2013-03-11 2015-12-23 奇跃公司 System and method for augmented and virtual reality
CN106293082A (en) * 2016-08-05 2017-01-04 成都华域天府数字科技有限公司 A kind of human dissection interactive system based on virtual reality

Also Published As

Publication number Publication date
CN108668050A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN108668050B (en) Video shooting method and device based on virtual reality
CN106355153B (en) A kind of virtual objects display methods, device and system based on augmented reality
CN111641844B (en) Live broadcast interaction method and device, live broadcast system and electronic equipment
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN106161939B (en) Photo shooting method and terminal
CN111726536A (en) Video generation method and device, storage medium and computer equipment
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114049459A (en) Mobile device, information processing method, and non-transitory computer readable medium
CN112653848B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN106648098B (en) AR projection method and system for user-defined scene
CN107479712B (en) Information processing method and device based on head-mounted display equipment
TW202304212A (en) Live broadcast method, system, computer equipment and computer readable storage medium
CN109840946B (en) Virtual object display method and device
CN112637665B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN114187392B (en) Virtual even image generation method and device and electronic equipment
WO2017042070A1 (en) A gazed virtual object identification module, a system for implementing gaze translucency, and a related method
CN111598996A (en) Article 3D model display method and system based on AR technology
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
KR102558294B1 (en) Device and method for capturing a dynamic image using technology for generating an image at an arbitray viewpoint
CN111383313B (en) Virtual model rendering method, device, equipment and readable storage medium
CN113706720A (en) Image display method and device
US11961190B2 (en) Content distribution system, content distribution method, and content distribution program
CN114779948B (en) Method, device and equipment for controlling instant interaction of animation characters based on facial recognition
CN113365130A (en) Live broadcast display method, live broadcast video acquisition method and related devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant