CN108668050A - Video capture method and apparatus based on virtual reality - Google Patents

Video capture method and apparatus based on virtual reality

Info

Publication number
CN108668050A
CN108668050A
Authority
CN
China
Prior art keywords
user
model
scene
prop model
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710210901.7A
Other languages
Chinese (zh)
Other versions
CN108668050B (en)
Inventor
李炜
胡治国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inlife Handnet Co Ltd
Original Assignee
Inlife Handnet Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inlife Handnet Co Ltd filed Critical Inlife Handnet Co Ltd
Priority to CN201710210901.7A priority Critical patent/CN108668050B/en
Publication of CN108668050A publication Critical patent/CN108668050A/en
Application granted granted Critical
Publication of CN108668050B publication Critical patent/CN108668050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

Embodiments of the invention disclose a video capture method and apparatus based on virtual reality. The video capture method presents a prop model to a user, obtains interaction information between the user and the prop model, obtains a corresponding virtual scene based on the interaction information, and then fuses the interaction information with the corresponding virtual scene in real time to output a target video. By presenting a pre-built prop model to the user through virtual reality, the scheme improves the degree of fit among the user's interaction information, the prop model, and the virtual scene, thereby improving shooting efficiency.

Description

Video capture method and apparatus based on virtual reality
Technical field
The present invention relates to the technical field of virtual reality, and more particularly to a video capture method and apparatus based on virtual reality.
Background art
VR (Virtual Reality, also called artificial environment) is a comprehensive integration technology involving fields such as computer graphics, human-computer interaction, sensing technology, and artificial intelligence. It uses three-dimensional graphics generation, multi-sensory interaction, and high-resolution display technologies to create a lifelike three-dimensional virtual environment. By wearing special sensing equipment such as a helmet and data gloves, a user can experience realistic visual, auditory, and olfactory sensations; alternatively, by using input devices such as a keyboard and mouse, the user can enter the virtual space, become a member of the virtual environment, interact in real time, and perceive and operate various objects in the virtual world, thereby gaining an immersive experience and understanding. VR is a new way for people to visualize and interact with complex data through computers; compared with traditional human-machine interfaces and the prevailing window-based operation, VR represents a qualitative leap in technology.
Green-screen shooting has become a prominent part of the film and television industry. In traditional green-screen shooting, an actor or other subject is filmed against a green backdrop, the green background is then keyed out, and corresponding special effects are finally added and composited in post-production. However, with current shooting techniques, the spatial fit between the special effects added in post-production and the physical subjects filmed on set is often poor; when the discrepancy is too large, reshoots are required, which reduces shooting efficiency and wastes human and material resources.
Summary of the invention
Embodiments of the present invention provide a video capture method and apparatus based on virtual reality, which can improve video shooting efficiency.
An embodiment of the present invention provides a video capture method based on virtual reality, including:
presenting a prop model to a user;
obtaining interaction information between the user and the prop model;
obtaining a corresponding virtual scene based on the interaction information;
fusing the interaction information with the corresponding virtual scene in real time to output a target video.
In some embodiments, the step of presenting the prop model to the user includes:
establishing a state database of the prop model;
presenting an original state of the prop model to the user;
generating a corresponding response state in the state database based on body movements of the user;
presenting the response state of the prop model to the user.
In some embodiments, the step of obtaining the interaction information between the user and the prop model includes:
obtaining body movements, expressions, and speech information of the user;
reading the response state of the prop model from the state database according to the body movements, expressions, and speech information of the user, to generate response information.
In some embodiments, the step of obtaining the corresponding virtual scene based on the interaction information includes:
establishing a virtual scene database;
obtaining a virtual position of the prop model in the interaction information;
generating the corresponding virtual scene in the virtual scene database according to the virtual position.
In some embodiments, the step of fusing the interaction information with the corresponding virtual scene in real time to output the target video includes:
imaging the interaction information to obtain an interaction picture of the user and the prop model;
fusing the interaction picture with the corresponding virtual scene in real time to output the target video.
Correspondingly, an embodiment of the present invention provides a video capture apparatus based on virtual reality, including:
a display module, configured to present a prop model to a user;
an information obtaining module, configured to obtain interaction information between the user and the prop model;
a scene obtaining module, configured to obtain a corresponding virtual scene based on the interaction information;
a fusion module, configured to fuse the interaction information with the corresponding virtual scene in real time to output a target video.
In some embodiments, the display module includes:
a state establishing unit, configured to establish a state database of the prop model;
a first display unit, configured to present an original state of the prop model to the user;
a state obtaining unit, configured to generate a corresponding response state in the state database based on body movements of the user;
a second display unit, configured to present the response state of the prop model to the user.
In some embodiments, the information obtaining module includes:
an information collecting unit, configured to obtain body movements, expressions, and speech information of the user;
an information generating unit, configured to read the response state of the prop model from the state database according to the body movements, expressions, and speech information of the user, to generate response information.
In some embodiments, the scene obtaining module includes:
a scene establishing unit, configured to establish a virtual scene database;
a position obtaining unit, configured to obtain a virtual position of the prop model in the interaction information;
a scene generating unit, configured to generate the corresponding virtual scene in the virtual scene database according to the virtual position.
In some embodiments, the fusion module includes:
a conversion unit, configured to image the interaction information to obtain an interaction picture of the user and the prop model;
a fusion unit, configured to fuse the interaction picture with the corresponding virtual scene in real time to output the target video.
In the video capture method based on virtual reality provided in the embodiments of the present invention, a prop model is presented to a user; interaction information between the user and the prop model is then obtained; a corresponding virtual scene is obtained based on the interaction information; and the interaction information is fused with the corresponding virtual scene in real time to output a target video. By presenting a pre-built prop model to the user through virtual reality, this scheme improves the degree of fit among the user's interaction information, the prop model, and the virtual scene, thereby improving shooting efficiency.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video capture method based on virtual reality according to an embodiment of the present invention.
Fig. 2 is another schematic flowchart of a video capture method based on virtual reality according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an application scenario of a video capture system based on virtual reality according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a video capture apparatus based on virtual reality according to an embodiment of the present invention.
Fig. 5 is another schematic structural diagram of a video capture apparatus based on virtual reality according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", and the like (if present) in the specification, claims, and accompanying drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that objects described in this way are interchangeable where appropriate. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion.
In this patent document, the accompanying drawings discussed below and the embodiments used to describe the principles of the present disclosure are for illustration only and should not be construed as limiting the scope of the disclosure. Those skilled in the art will understand that the principles of the present invention can be implemented in any suitably arranged system. Exemplary embodiments will be described in detail, and examples of these embodiments are shown in the accompanying drawings. In addition, a terminal according to the exemplary embodiments will be described in detail with reference to the accompanying drawings. The same reference numerals in the drawings refer to the same elements.
The terminology used in the description of the present invention is only for describing particular embodiments and is not intended to limit the concept of the present invention. Unless the context clearly indicates otherwise, expressions used in the singular also cover the plural. In the description of the present invention, it should be understood that terms such as "comprising", "having", and "containing" indicate the possible presence of the features, numbers, steps, actions, or combinations thereof disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, or combinations thereof may be present or added. The same reference numerals in the drawings refer to the same parts.
Embodiments of the present invention provide a video capture method and apparatus based on virtual reality, which are described in detail below.
In a preferred embodiment, a video capture method based on virtual reality is provided. As shown in Fig. 1, the flow may be as follows:
101. Present a prop model to a user.
Specifically, the prop model may be a pre-built three-dimensional model of a virtual prop, for example, a three-dimensional monster model or a three-dimensional weapon model in a science fiction film.
The prop model may be stored in a terminal or in a corresponding storage area of a server.
In a specific implementation, the user may wear virtual reality glasses so that the prop model is presented to the user.
102. Obtain interaction information between the user and the prop model.
In the embodiments of the present invention, the interaction information refers to the action information, expression information, shape information, speech information, respective position information, and other information exhibited by the user and the prop model when they interact with each other.
103. Obtain a corresponding virtual scene based on the interaction information.
In some embodiments, required information may be extracted from the interaction information, and the corresponding virtual scene may be obtained according to the required information. For example, mapping relationships between interaction information and virtual scenes may be pre-established, and the interaction information, the virtual scenes, and the mapping relationships may be saved to obtain a mapping relationship set. Then, according to target interaction information and the mapping relationships, a target virtual scene corresponding to the target interaction information is obtained from the mapping relationship set.
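As a purely illustrative aside (not part of the claimed method), the mapping relationship set can be pictured as a small lookup table keyed by features of the interaction information. The Python sketch below assumes hypothetical names (InteractionKey, SceneMapping) and example feature fields; the patent does not specify how the key is formed.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class InteractionKey:
    # Hypothetical features extracted from the interaction information for matching.
    user_action: str    # e.g. "throw_weapon"
    prop_response: str  # e.g. "fall_down"

class SceneMapping:
    def __init__(self) -> None:
        # Pre-established mapping set: interaction features -> virtual scene identifier.
        self._mapping: Dict[InteractionKey, str] = {}

    def register(self, key: InteractionKey, scene_id: str) -> None:
        self._mapping[key] = scene_id

    def lookup(self, key: InteractionKey) -> Optional[str]:
        # Return the target virtual scene for the target interaction information, if any.
        return self._mapping.get(key)

# Usage: build the mapping set in advance, then query it while shooting.
mapping = SceneMapping()
mapping.register(InteractionKey("throw_weapon", "fall_down"), "scene_light_wave")
print(mapping.lookup(InteractionKey("throw_weapon", "fall_down")))  # scene_light_wave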
104. Fuse the interaction information with the corresponding virtual scene in real time to output a target video.
In practical applications, the interaction information may be matched with the virtual scene in real time to obtain multiple frames of three-dimensional images, and the resulting frames are then assembled into a video file, so that the interaction information is fused with the corresponding virtual scene, the target video is obtained, and the video capture is completed.
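A minimal sketch of the real-time matching step, assuming integer timestamps and NumPy image arrays (both assumptions; the patent works with three-dimensional images), pairs each interaction frame with the scene frame for the same time point before the frames are assembled into a video:

from typing import Dict, List, Tuple
import numpy as np

def match_by_timestamp(interaction_frames: Dict[int, np.ndarray],
                       scene_frames: Dict[int, np.ndarray]) -> List[Tuple[np.ndarray, np.ndarray]]:
    # Pair each interaction frame with the virtual-scene frame for the same time point.
    return [(interaction_frames[t], scene_frames[t])
            for t in sorted(interaction_frames)
            if t in scene_frames]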
As can be seen from the above, an embodiment of the present invention provides a video capture method based on virtual reality: a prop model is presented to a user; interaction information between the user and the prop model is then obtained; a corresponding virtual scene is obtained based on the interaction information; and the interaction information is fused with the corresponding virtual scene in real time to output a target video. By presenting the pre-built prop model to the user through virtual reality, this scheme improves the degree of fit among the user's interaction information, the prop model, and the virtual scene, thereby improving shooting efficiency.
In another embodiment of the method, a further video capture method based on virtual reality is provided. As shown in Fig. 2, the flow may be as follows:
201. Establish a state database of a prop model.
In this embodiment, the state database may be stored in a terminal device or a server. In a specific implementation, in order to increase the data reading speed and facilitate data access, the state database may be stored locally on the terminal.
The prop model may be a pre-built three-dimensional model of a virtual prop, for example, a three-dimensional monster model or a three-dimensional weapon model in a science fiction film.
202. Present an original state of the prop model to the user.
Specifically, the original state is the initial model state. In this embodiment, it refers to the stable state set by the model designer when the prop model has not received any operation instruction.
203. Generate a corresponding response state in the state database based on the user's body movements.
Specifically, the force information produced by the user's body movements (including movements of the legs, hands, head, torso, and so on), the position of the user relative to the prop model, the angle of the user's limbs relative to a particular part of the prop model, and related position information may be obtained. The obtained information is then parsed to obtain corresponding parameter information. After the parameter information is processed by related algorithms, the corresponding parameters for the prop model are obtained, a corresponding response state is generated from the resulting parameters, and the response state is stored in the state database.
In practical applications, the user's body movements may be obtained through a series of virtual reality devices. For example, various parameters of the user's body may be obtained through a data suit, data gloves, a data wristband, or other devices worn by the user, and the user's body movements are then determined from these parameters.
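The following sketch is one hypothetical way to turn the parsed motion parameters into a response state stored in the state database; the thresholds, parameter names, and state identifiers are invented for illustration and are not taken from the patent.

from typing import Dict

# Illustrative state database holding the prop model's original and response states.
state_db: Dict[str, dict] = {"original": {"pose": "idle"}}

def generate_response_state(force: float, distance_m: float, angle_deg: float) -> str:
    # Map parsed user-motion parameters onto a response state of the prop model;
    # the thresholds below are arbitrary illustrative values.
    if distance_m < 1.0 and force > 50.0:
        state_id, state = "knocked_down", {"pose": "fall", "recoil": round(force * 0.01, 2)}
    elif distance_m < 2.0:
        state_id, state = "step_back", {"pose": "retreat", "facing_deg": angle_deg}
    else:
        state_id, state = "original", state_db["original"]
    state_db[state_id] = state  # store the generated response state in the state database
    return state_id

print(generate_response_state(force=80.0, distance_m=0.8, angle_deg=15.0))  # knocked_down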
204. Present the response state of the prop model to the user.
The response state of the prop model is presented to the user in real time according to the user's body movements. For example, when the user throws a punch with the left hand toward the head of the prop model, the prop model correspondingly steps back several paces and falls to the ground.
205. Obtain the user's body movements, expressions, and speech information.
Specifically, the user's body movements, expressions, speech information, and the like may be obtained while the user interacts with the prop model.
206. Read the response state of the prop model from the state database to generate response information.
In some embodiments, the response state of the prop model may correspondingly take many forms, for example, body movements, expressions, and speech information.
In a specific implementation, the body movements of the prop model may be obtained according to one or more of the user's body movements, expressions, and speech information; the expression of the prop model may be obtained according to one or more of the user's body movements, expressions, and speech information; and the speech information of the prop model may be obtained according to one or more of the user's body movements, expressions, and speech information. The corresponding response information is then generated from the obtained response state.
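A compact sketch of generating response information from the state database is shown below; the dictionary fields and the simple rules deriving the prop model's expression and speech are assumptions made for illustration only.

from typing import Dict, Optional

def generate_response_info(state_db: Dict[str, dict],
                           user_action: str,
                           user_expression: Optional[str] = None,
                           user_speech: Optional[str] = None) -> dict:
    # Read the prop model's response state for the user's action and bundle it,
    # together with a derived expression and speech, into response information.
    state = state_db.get(user_action, state_db.get("original", {}))
    return {
        "prop_action": state.get("pose", "idle"),
        "prop_expression": "angry" if user_expression == "aggressive" else "neutral",
        "prop_speech": "roar" if user_speech else None,
    }

# Example: the user throws a weapon while shouting.
db = {"original": {"pose": "idle"}, "throw_weapon": {"pose": "fall"}}
print(generate_response_info(db, "throw_weapon", user_expression="aggressive", user_speech="hah"))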
207. Obtain a corresponding virtual scene based on the interaction information between the user and the prop model.
In this embodiment, the interaction information may include the user's body movements, expressions, and speech information as well as the response information of the prop model. The virtual scene may be obtained in many ways. Optionally, the corresponding virtual scene may be determined according to the position of the prop model in the virtual space. That is, the step of obtaining the corresponding virtual scene based on the interaction information between the user and the prop model may include the following flow:
establishing a virtual scene database;
obtaining a virtual position of the prop model in the interaction information;
generating the corresponding virtual scene in the virtual scene database according to the virtual position.
Specifically, the virtual scene database may be established in a storage area of the terminal device to facilitate data access and increase the data reading speed.
For example, when the prop model moves from the far left of the scene toward the far right, the scenery on the left side of the virtual scene disappears along the movement track of the prop model, and new scenery is added on the right side.
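The scrolling behaviour in this example can be sketched as selecting the scenery tiles around the prop model's virtual position; the tile list, tile width, and window size below are purely illustrative assumptions.

from typing import List

SCENE_TILES: List[str] = ["ruins", "street", "bridge", "harbor", "skyline"]  # illustrative
TILE_WIDTH = 10.0  # virtual-space units covered by one scenery tile (assumed)
VIEW_TILES = 2     # number of extra tiles kept visible around the prop model (assumed)

def visible_scene(prop_x: float) -> List[str]:
    # Return the scenery tiles around the prop model's virtual x position.
    center = int(prop_x // TILE_WIDTH)
    lo = max(0, center - VIEW_TILES // 2)
    hi = min(len(SCENE_TILES), lo + VIEW_TILES + 1)
    return SCENE_TILES[lo:hi]

# As the prop model moves right, left-hand tiles drop out of view and new right-hand tiles appear.
print(visible_scene(5.0))   # ['ruins', 'street', 'bridge']
print(visible_scene(35.0))  # ['bridge', 'harbor', 'skyline']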
208. Image the interaction information to obtain an interaction picture of the user and the prop model.
Specifically, taking time as the reference, the user's body movements, expressions, and speech information may be matched and synthesized with the response information of the prop model to obtain, frame by frame over time, the interaction images between the user and the prop model.
209. Fuse the interaction picture with the corresponding virtual scene in real time to output a target video.
Similarly, taking time as the reference, the interaction three-dimensional image between the user and the prop model corresponding to a certain time point and the virtual scene corresponding to that time point are obtained. The interaction three-dimensional image and the virtual scene are then composited to obtain a new three-dimensional image. This is repeated for each time point, and the three-dimensional images of all the time points are synthesized to obtain multiple frames of three-dimensional images of the interaction information and the virtual scene. The resulting frames are then assembled into a video file in chronological order, so that the interaction information is fused with the corresponding virtual scene, the target video is obtained, and the video capture is completed.
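As a simplified two-dimensional stand-in for the per-time-point compositing and chronological assembly described above, the sketch below overlays each interaction frame on its scene frame and writes the result with OpenCV. The keying rule (non-black foreground pixels) and the mp4v codec are assumptions, and plain image arrays replace the patent's three-dimensional images.

import cv2
import numpy as np
from typing import Dict

def fuse_to_video(interaction: Dict[int, np.ndarray],
                  scene: Dict[int, np.ndarray],
                  out_path: str, fps: float = 25.0) -> None:
    timestamps = sorted(t for t in interaction if t in scene)
    if not timestamps:
        return
    h, w = scene[timestamps[0]].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for t in timestamps:
        fg, bg = interaction[t], scene[t]
        # Treat non-black foreground pixels as the user/prop layer (assumed keying rule).
        mask = (fg.sum(axis=2) > 0)[..., None]
        frame = np.where(mask, fg, bg).astype(np.uint8)
        writer.write(frame)  # frames are appended in chronological order
    writer.release()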
As can be seen from the above, an embodiment of the present invention provides a video capture method based on virtual reality: a state database of a prop model is established, and an original state of the prop model is presented to a user; a corresponding response state is then generated in the state database based on the user's body movements, and the response state of the prop model is presented to the user. Next, the user's body movements, expressions, and speech information are obtained, and the response state of the prop model is read from the state database to generate response information. The corresponding virtual scene is then generated in a virtual scene database according to the virtual position of the prop model, and finally the user's body movements, expressions, and speech information, the response information of the prop model, and the corresponding virtual scene are fused to output a target video and complete the video capture. By presenting the pre-built prop model to the user through virtual reality, this scheme, on the one hand, enables the user to interact with the prop model in the virtual space, improving the realism of the video capture; on the other hand, by producing the special effects first, it improves the degree of fit among the user's interaction information, the prop model, and the virtual scene, avoids the problem in post-production green-screen work where a poor spatial fit between the added special effects and the physically filmed subjects makes a reshoot necessary, improves shooting efficiency, and saves human and material resources.
Referring to Fig. 3, a further embodiment of the present invention provides a video capture system based on virtual reality. As shown in the figure, the video capture system includes: a photographic device 33, a server 34, and a display control device 36.
The photographic device 33 may be a video camera, a camera, or an electronic device with a camera function, and may be used to collect image information. For example, the photographic device may be used to capture the user's body movements, expressions, and speech information.
The server 34 may specifically be a network device such as a data server or a web server. The server 34 may be used to provide the prop model 341 and the three-dimensional images of the virtual scene 342.
The display control device 36 may include an intelligent device with computing and processing capabilities, such as a computer, a smartphone, or a tablet computer.
In another embodiment, a further video capture method based on virtual reality is provided. The video capture method is described in detail below based on the above video capture system, taking the prop model 341 being a monster model 341 as an example.
In this embodiment, the user 31 enters the shooting scene wearing virtual reality devices 32 (such as data gloves, a data suit, virtual reality glasses, data shoes, and a data wristband). A display device (such as a large-scale projection 3D visual system, e.g. a CAVE) presents to the user 31 the original state of the monster model 341 provided by the server 34. The user 31 sees the monster model 341 through the virtual reality glasses and then interacts with it. The user's body movements are obtained through the data suit, the data gloves, and the like, and the corresponding parameter information is sent to the server 34; based on the parameter information, the server 34 retrieves and displays the corresponding virtual scene 342 and the response state of the monster model 341.
As shown in Fig. 3, the user 31 makes a weapon-throwing action toward the monster model 341, and the server 34 generates a corresponding light wave (i.e., the virtual scene 342) according to the body movement of the user 31. Based on the weapon-throwing action of the user 31, the monster model 341 shows a falling state. The photographic device 33 captures the body movements, expressions, speech, and other information of the user 31 and transmits them in real time to the display control device 36 for display; based on the body movements of the user 31, the server 34 transmits the virtual scene 342 and the monster model 341 to the display control device 36 in real time for display, so as to obtain a fused image 35.
In practical applications, in order to improve the realism of the scene, a sound device (such as a three-dimensional audio system or non-traditional stereo) may also be used to provide corresponding audio for the virtual special effects and the monster model 341.
As can be seen from the above, the video capture method based on virtual reality provided in the embodiments of the present invention presents a pre-built prop model to the user through virtual reality, thereby improving the degree of fit among the user's interaction information, the prop model, and the virtual scene and improving shooting efficiency.
In a further embodiment of the invention, a video capture apparatus based on virtual reality is also provided. As shown in Fig. 4, the video capture apparatus based on virtual reality may include a display module 41, an information obtaining module 42, a scene obtaining module 43, and a fusion module 44, wherein:
the display module 41 is configured to present a prop model to a user;
the information obtaining module 42 is configured to obtain interaction information between the user and the prop model;
the scene obtaining module 43 is configured to obtain a corresponding virtual scene based on the interaction information;
the fusion module 44 is configured to fuse the interaction information with the corresponding virtual scene in real time to output a target video.
Referring to Fig. 5, in some embodiments, the display module 41 may include a state establishing unit 411, a first display unit 412, a state obtaining unit 413, and a second display unit 414, wherein:
the state establishing unit 411 is configured to establish a state database of the prop model;
the first display unit 412 is configured to present an original state of the prop model to the user;
the state obtaining unit 413 is configured to generate a corresponding response state in the state database based on the user's body movements;
the second display unit 414 is configured to present the response state of the prop model to the user.
With continued reference to Fig. 5, in some embodiments, the information obtaining module 42 may include an information collecting unit 421 and an information generating unit 422, wherein:
the information collecting unit 421 is configured to obtain the user's body movements, expressions, and speech information;
the information generating unit 422 is configured to read the response state of the prop model from the state database to generate response information.
With continued reference to Fig. 5, in some embodiments, the scene obtaining module 43 may include a scene establishing unit 431, a position obtaining unit 432, and a scene generating unit 433, wherein:
the scene establishing unit 431 is configured to establish a virtual scene database;
the position obtaining unit 432 is configured to obtain a virtual position of the prop model in the interaction information;
the scene generating unit 433 is configured to generate a corresponding virtual scene in the virtual scene database according to the virtual position.
With continued reference to Fig. 5, in some embodiments, the fusion module 44 may include a conversion unit 441 and a fusion unit 442, wherein:
the conversion unit 441 is configured to image the interaction information to obtain an interaction picture of the user and the prop model;
the fusion unit 442 is configured to fuse the interaction picture with the corresponding virtual scene in real time to output the target video.
As can be seen from the above, an embodiment of the present invention provides a video capture apparatus based on virtual reality: a prop model is presented to a user; interaction information between the user and the prop model is then obtained; a corresponding virtual scene is obtained based on the interaction information; and the interaction information is fused with the corresponding virtual scene in real time to output a target video. By presenting a pre-built prop model to the user through virtual reality, this scheme improves the degree of fit among the user's interaction information, the prop model, and the virtual scene, thereby improving shooting efficiency.
Term " one " and " described " and similar word have been used during describing idea of the invention (especially In the appended claims), it should be construed to not only cover odd number by these terms but also cover plural number.In addition, unless herein In be otherwise noted, otherwise herein narration numberical range when merely by quick method belong to the every of relevant range to refer to A independent value, and each independent value is incorporated into this specification, just as these values have individually carried out statement one herein Sample.In addition, unless otherwise stated herein or context has specific opposite prompt, otherwise institute described herein is methodical Step can be executed by any appropriate order.The change of the present invention is not limited to the step of description sequence.Unless in addition Advocate, otherwise uses any and all example or exemplary language presented herein (for example, " such as ") to be all only Idea of the invention is better described, and not the range of idea of the invention limited.Spirit and model are not being departed from In the case of enclosing, those skilled in the art becomes readily apparent that a variety of modifications and adaptation.
The video capture method and apparatus based on virtual reality provided in the embodiments of the present invention have been described in detail above. It should be understood that the exemplary embodiments described herein are to be considered descriptive only; they are intended to help understand the method of the present invention and its core idea, not to limit the invention. Descriptions of features or aspects in each exemplary embodiment should generally be considered applicable to similar features or aspects in other exemplary embodiments. Although the present invention has been described with reference to exemplary embodiments, various changes and modifications may be suggested to those skilled in the art. The invention is intended to cover such changes and modifications within the scope of the appended claims.

Claims (10)

1. A video capture method based on virtual reality, characterized by comprising:
presenting a prop model to a user;
obtaining interaction information between the user and the prop model;
obtaining a corresponding virtual scene based on the interaction information;
fusing the interaction information with the corresponding virtual scene in real time to output a target video.
2. The video capture method based on virtual reality according to claim 1, characterized in that the step of presenting the prop model to the user comprises:
establishing a state database of the prop model;
presenting an original state of the prop model to the user;
generating a corresponding response state in the state database based on body movements of the user;
presenting the response state of the prop model to the user.
3. The video capture method based on virtual reality according to claim 2, characterized in that the step of obtaining the interaction information between the user and the prop model comprises:
obtaining body movements, expressions, and speech information of the user;
reading the response state of the prop model from the state database according to the body movements, expressions, and speech information of the user, to generate response information.
4. The video capture method based on virtual reality according to claim 1, characterized in that the step of obtaining the corresponding virtual scene based on the interaction information comprises:
establishing a virtual scene database;
obtaining a virtual position of the prop model in the interaction information;
generating the corresponding virtual scene in the virtual scene database according to the virtual position.
5. The video capture method based on virtual reality according to claim 1, characterized in that the step of fusing the interaction information with the corresponding virtual scene in real time to output the target video comprises:
imaging the interaction information to obtain an interaction picture of the user and the prop model;
fusing the interaction picture with the corresponding virtual scene in real time to output the target video.
6. A video capture apparatus based on virtual reality, characterized by comprising:
a display module, configured to present a prop model to a user;
an information obtaining module, configured to obtain interaction information between the user and the prop model;
a scene obtaining module, configured to obtain a corresponding virtual scene based on the interaction information;
a fusion module, configured to fuse the interaction information with the corresponding virtual scene in real time.
7. The video capture apparatus based on virtual reality according to claim 6, characterized in that the display module comprises:
a state establishing unit, configured to establish a state database of the prop model;
a first display unit, configured to present an original state of the prop model to the user;
a state obtaining unit, configured to generate a corresponding response state in the state database based on body movements of the user;
a second display unit, configured to present the response state of the prop model to the user.
8. The video capture apparatus based on virtual reality according to claim 7, characterized in that the information obtaining module comprises:
an information collecting unit, configured to obtain body movements, expressions, and speech information of the user;
an information generating unit, configured to read the response state of the prop model from the state database according to the body movements, expressions, and speech information of the user, to generate response information.
9. The video capture apparatus based on virtual reality according to claim 6, characterized in that the scene obtaining module comprises:
a scene establishing unit, configured to establish a virtual scene database;
a position obtaining unit, configured to obtain a virtual position of the prop model in the interaction information;
a scene generating unit, configured to generate the corresponding virtual scene in the virtual scene database according to the virtual position.
10. The video capture apparatus based on virtual reality according to claim 6, characterized in that the fusion module comprises:
a conversion unit, configured to image the interaction information to obtain an interaction picture of the user and the prop model;
a fusion unit, configured to fuse the interaction picture with the corresponding virtual scene in real time to output the target video.
CN201710210901.7A 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality Active CN108668050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710210901.7A CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710210901.7A CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Publications (2)

Publication Number Publication Date
CN108668050A (en) 2018-10-16
CN108668050B CN108668050B (en) 2021-04-27

Family

ID=63784579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710210901.7A Active CN108668050B (en) 2017-03-31 2017-03-31 Video shooting method and device based on virtual reality

Country Status (1)

Country Link
CN (1) CN108668050B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130061538A (en) * 2011-12-01 2013-06-11 한국전자통신연구원 Apparatus and method for providing contents based virtual reality
CN103869933A (en) * 2012-12-11 2014-06-18 联想(北京)有限公司 Information processing method and terminal equipment
CN105188516A (en) * 2013-03-11 2015-12-23 奇跃公司 System and method for augmented and virtual reality
CN104460950A (en) * 2013-09-15 2015-03-25 南京大五教育科技有限公司 Implementation of simulation interactions between users and virtual objects by utilizing virtual reality technology
CN104407701A (en) * 2014-11-27 2015-03-11 曦煌科技(北京)有限公司 Individual-oriented clustering virtual reality interactive system
CN106293082A (en) * 2016-08-05 2017-01-04 成都华域天府数字科技有限公司 A kind of human dissection interactive system based on virtual reality
CN106448316A (en) * 2016-08-31 2017-02-22 徐丽芳 Fire training method based on virtual reality technology
CN106530880A (en) * 2016-08-31 2017-03-22 徐丽芳 Experiment simulation method based on virtual reality technology
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112468680A (en) * 2019-09-09 2021-03-09 上海御正文化传播有限公司 Processing method of advertisement shooting site synthesis processing system
CN110673735A (en) * 2019-09-30 2020-01-10 长沙自由视像信息科技有限公司 Holographic virtual human AR interaction display method, device and equipment
CN111192350A (en) * 2019-12-19 2020-05-22 武汉西山艺创文化有限公司 Motion capture system and method based on 5G communication VR helmet
CN111640198A (en) * 2020-06-10 2020-09-08 上海商汤智能科技有限公司 Interactive shooting method and device, electronic equipment and storage medium
WO2021258978A1 (en) * 2020-06-24 2021-12-30 北京字节跳动网络技术有限公司 Operation control method and apparatus
CN111931830A (en) * 2020-07-27 2020-11-13 泰瑞数创科技(北京)有限公司 Video fusion processing method and device, electronic equipment and storage medium
CN111931830B (en) * 2020-07-27 2023-12-29 泰瑞数创科技(北京)股份有限公司 Video fusion processing method and device, electronic equipment and storage medium
CN113327309A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Video playing method and device
CN113327309B (en) * 2021-05-27 2024-04-09 百度在线网络技术(北京)有限公司 Video playing method and device

Also Published As

Publication number Publication date
CN108668050B (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN108668050A (en) Video capture method and apparatus based on virtual reality
US11887234B2 (en) Avatar display device, avatar generating device, and program
CN108447043B (en) Image synthesis method, equipment and computer readable medium
KR100912877B1 (en) A mobile communication terminal having a function of the creating 3d avata model and the method thereof
CN113240782A (en) Streaming media generation method and device based on virtual role
CN111080759B (en) Method and device for realizing split mirror effect and related product
KR102491140B1 (en) Method and apparatus for generating virtual avatar
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN111541950B (en) Expression generating method and device, electronic equipment and storage medium
CN108525305A (en) Image processing method, device, storage medium and electronic equipment
CN110555507B (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN106161939A (en) A kind of method, photo taking and terminal
CN106657060A (en) VR communication method and system based on reality scene
CN113362263A (en) Method, apparatus, medium, and program product for changing the image of a virtual idol
CN109523615B (en) Data processing method and device for virtual animation character actions
US10955911B2 (en) Gazed virtual object identification module, a system for implementing gaze translucency, and a related method
CN110536095A (en) Call method, device, terminal and storage medium
CN108259806A (en) A kind of video communication method, equipment and terminal
CN113453027B (en) Live video and virtual make-up image processing method and device and electronic equipment
CN110503707A (en) A kind of true man's motion capture real-time animation system and method
CN111383313B (en) Virtual model rendering method, device, equipment and readable storage medium
CN113176827B (en) AR interaction method and system based on expressions, electronic device and storage medium
CN108122273A (en) A kind of number animation generation system and method
JP7432270B1 (en) Information processing system, information processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant