CN106303555A - Live broadcasting method, device and system based on mixed reality - Google Patents

Live broadcasting method, device and system based on mixed reality

Info

Publication number: CN106303555A
Authority: CN (China)
Application number: CN201610639734.3A
Other languages: Chinese (zh)
Other versions: CN106303555B (en)
Inventors: 周苑龙, 秦凯, 熊飞
Original assignee: 深圳市豆娱科技有限公司
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed): 2016-08-05
Filing date: 2016-08-05
Publication date: 2017-01-04

2016-08-05: Application filed by 深圳市豆娱科技有限公司
2016-08-05: Priority to CN201610639734.3A
2017-01-04: Publication of CN106303555A
2019-12-03: Application granted
2019-12-03: Publication of CN106303555B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements or protocols for real-time communications
    • H04L 65/40 Services or applications
    • H04L 65/4069 Services related to one-way streaming
    • H04L 65/4076 Multicast or broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/65 Transmission of management data between client and server
    • H04N 21/658 Transmission by the client directed to the server
    • H04N 21/6587 Control parameters, e.g. trick play commands, viewpoint selection

Abstract

The invention provides a live broadcasting method based on mixed reality. The method includes: acquiring video data and audio data collected by an on-site capture end; generating, according to the video data, a three-dimensional scene image matching the video data; playing the captured video data in the three-dimensional scene image, and playing the audio data according to the user's position in the three-dimensional scene. The invention enables the user to obtain richer live-scene information and thereby creates a more realistic on-site atmosphere.

Description

Live broadcasting method, device and system based on mixed reality

Technical field

The invention belongs to the field of the Internet, and in particular relates to a live broadcasting method, device and system based on mixed reality.

Background art

With the development of network communication technology, the modes of data transmission have become increasingly diverse. An intelligent terminal, for example, can transmit data at high speed over a mobile communication network (3G, 4G, etc.), a WiFi network, or a wired network. Alongside this growth in transmission speed, live video distribution has added network broadcasting on top of traditional television broadcasting, and while watching live content a user can also send and receive interactive content in real time, which enhances the interactivity of the broadcast.

In current live broadcasting methods, video data is generally captured on site by a camera and audio data by a microphone. The audio data and video data are encoded and transmitted to a user terminal, which decodes and plays the encoded data, so that a user can play the live data on any network-connected terminal.

Because existing live broadcasting methods play content directly on an intelligent terminal, they are limited to the playback of audio and video, which prevents the user from obtaining richer live-scene information and from experiencing a realistic on-site atmosphere.

Summary of the invention

It is an object of the invention to provide a live broadcasting method, device and system based on mixed reality, to solve the problem that the live broadcasting methods of the prior art neither let users obtain richer live-scene information nor create a realistic on-site atmosphere.

In a first aspect, an embodiment of the invention provides a live broadcasting method based on mixed reality, the method including:

acquiring video data and audio data collected by an on-site capture end;

generating, according to the video data, a three-dimensional scene image matching the video data;

playing the captured video data in the three-dimensional scene image, and playing the audio data according to the user's position in the three-dimensional scene.

In combination with the first aspect, in a first possible implementation of the first aspect, the method further includes:

receiving a scene-entry request from a user and, according to the request, generating a corresponding avatar in the three-dimensional scene;

collecting behavior state data of the user and, according to the collected behavior state data, controlling the avatar in the three-dimensional scene to perform the corresponding actions.

In combination with the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the method further includes:

displaying, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.

In combination with the first aspect, in a third possible implementation of the first aspect, the step of playing the captured video data in the three-dimensional scene image includes:

detecting the image region in which a person is located in the video data;

performing image interception according to the image region of the person, and playing the intercepted image region in the three-dimensional scene image.

In a second aspect, an embodiment of the invention provides a live broadcast device based on mixed reality, the device including:

a data acquisition unit, configured to acquire the video data and audio data collected by an on-site capture end;

a three-dimensional scene image generating unit, configured to generate, according to the video data, a three-dimensional scene image matching the video data;

a data playback unit, configured to play the captured video data in the three-dimensional scene image, and to play the audio data according to the user's position in the three-dimensional scene.

In combination with the second aspect, in a first possible implementation of the second aspect, the device further includes:

an avatar generating unit, configured to receive a scene-entry request from a user and, according to the request, generate a corresponding avatar in the three-dimensional scene;

a first action control and display unit, configured to collect behavior state data of the user and, according to the collected behavior state data, control the avatar in the three-dimensional scene to perform the corresponding actions.

In combination with the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the device further includes:

a second action control and display unit, configured to display, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.

In combination with the second aspect, in a third possible implementation of the second aspect, the data playback unit includes:

an image detection subunit, configured to detect the image region in which a person is located in the video data;

an image interception subunit, configured to perform image interception according to the image region of the person, and to play the intercepted image region in the three-dimensional scene image.

In a third aspect, an embodiment of the invention provides a live broadcast system based on mixed reality, the system including a behavior data acquisition module, a processor and a display module, wherein:

the behavior data acquisition module is configured to collect behavior state data of a user and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by an on-site capture end, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate the user's avatar in the three-dimensional scene image, and control the motion state of the avatar according to the collected behavior state data;

the display module is configured to display the three-dimensional scene image.

In combination with the third aspect, in a first possible implementation of the third aspect, the behavior data acquisition module and the display module are a head-mounted virtual-reality helmet.

In the invention, after the video data and audio data collected by the on-site capture end are acquired, a corresponding three-dimensional scene image is generated according to the video data, the captured video data is played in the three-dimensional scene image, and the playback of the audio data is controlled according to the user's viewing position in the three-dimensional scene. The user can thereby obtain richer live-scene information, creating a more realistic on-site atmosphere.

Brief description of the drawings

Fig. 1 is a flowchart of the mixed-reality-based live broadcasting method provided by the first embodiment of the invention;

Fig. 2 is a flowchart of the mixed-reality-based live broadcasting method provided by the second embodiment of the invention;

Fig. 3 is a flowchart of the mixed-reality-based live broadcasting method provided by the third embodiment of the invention;

Fig. 4 is a structural schematic diagram of the mixed-reality-based live broadcast device provided by the fourth embodiment of the invention.

Detailed description of the invention

In order to make the purpose, technical solution and advantages of the invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.

The purpose of the embodiments of the invention is to provide a mixed-reality-based live broadcasting method with a better on-site atmosphere, to solve the problem that live broadcasting in the prior art usually plays the collected video and audio data directly; this playback mode cannot effectively reproduce the audio-visual scene, and the user cannot experience the on-site atmosphere. The invention is further illustrated below in conjunction with the drawings.

Embodiment one:

Fig. 1 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the first embodiment of the invention, detailed as follows:

In step S101, the video data and audio data collected by the on-site capture end are acquired.

Specifically, the on-site capture end in the embodiments of the invention may be a professional camera used for live broadcasting of a sports match, a concert, a television program or the like, together with a microphone used for on-site commentary. After the video data from the camera and the audio data from the microphone are encoded and compressed, they are sent over the network to a live broadcast server, and other user terminals can access the server on request to obtain the live video data and audio data.

Of course, the video data and audio data may also be field data collected by a camera and microphone connected to a computer, or field data collected by devices such as a smartphone.
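
For illustration, the viewer side of this pipeline can be sketched as follows. The sketch assumes an FFmpeg-enabled OpenCV build and a hypothetical RTMP endpoint; the patent does not specify a transport protocol or URL scheme, so both are assumptions:

    import cv2

    STREAM_URL = "rtmp://live.example.com/show/stream-1"  # hypothetical endpoint

    cap = cv2.VideoCapture(STREAM_URL)   # opens the network stream via FFmpeg
    while cap.isOpened():
        ok, frame = cap.read()           # one decoded two-dimensional video frame
        if not ok:
            break
        # ... hand the frame to the three-dimensional scene renderer (steps S102/S103)
    cap.release()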

In the embodiments of the invention, the acquired video data is two-dimensional image data and generally includes the anchor's portrait. Live broadcasts can be classified according to their content, for example knowledge-explanation broadcasts, performance broadcasts, sports-match broadcasts, and other television-program commentary broadcasts.

In step S102, a three-dimensional scene image matching the video data is generated according to the video data.

In the embodiments of the invention, the three-dimensional scene image may be one of multiple pre-stored three-dimensional scene images; after the video data collected by the on-site capture end is obtained, matching can be performed according to the video data. The matching method may include computing the similarity between the background image of the video data and each three-dimensional scene image; when the similarity exceeds a certain threshold, the video data is considered to match the corresponding scene data.
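
A minimal sketch of such similarity matching is shown below, using a colour-histogram comparison as the similarity measure; the patent does not fix a particular measure, and the function names and threshold are illustrative:

    import cv2

    def hist_similarity(img_a, img_b, bins=32):
        # Correlation between the hue/saturation histograms of two images.
        hists = []
        for img in (img_a, img_b):
            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
            h = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
            cv2.normalize(h, h)
            hists.append(h)
        return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

    def match_scene(frame, scene_images, threshold=0.8):
        # Return the index of the first pre-stored scene whose similarity
        # to the live frame exceeds the threshold, or None if none matches.
        for i, scene in enumerate(scene_images):
            if hist_similarity(frame, scene) > threshold:
                return i
        return None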

Alternatively, each three-dimensional scene image may be pre-assigned a corresponding audio feature. When the similarity between the audio feature of a three-dimensional scene image and the collected audio data exceeds a certain value, that three-dimensional scene image is matched to the collected data.

Of course, the scene data may also be generated automatically according to the collected video data. One generation method is to combine the images in the obtained video data with a three-dimensional image generation tool to automatically generate the corresponding three-dimensional scene image. Alternatively, the corresponding three-dimensional scene image may be looked up according to a user-defined live broadcast type.

For example, for a live scene of the performance type, a concert three-dimensional scene image can be generated automatically, and the obtained video data can be played on the large screen and the main stage of the concert scene. For a knowledge-explanation broadcast, a classroom scene can be generated, and the video data can be played at the lectern.

In step S103, the captured video data is played in the three-dimensional scene image, and the audio data is played according to the user's position in the three-dimensional scene.

In the generated three-dimensional scene image, the video data can be played at a pre-set position. The user watches the video data within the generated three-dimensional scene image and thus obtains richer live-scene information, which enhances the on-site atmosphere of the video playback.

Further, the invention also includes obtaining the audio data corresponding to the user's viewing position in the three-dimensional scene image. The user's viewing position in the three-dimensional scene image can be allocated according to the user's request, and the scene-adapted audio data is the sound corresponding to that position. For example, when the user's viewing position is at position A, the playback times of the left and right channels are controlled by computing the relation between the user and the position of the sound source in the three-dimensional scene image, thereby simulating the on-site sound effect and further enhancing the on-site atmosphere.
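
The left/right-channel timing control can be pictured with the following sketch, which delays and attenuates each channel according to the distance from a virtual ear to the sound source. The sample rate, ear spacing and function names are assumptions for illustration, not values taken from the patent:

    import numpy as np

    SPEED_OF_SOUND = 343.0   # metres per second
    SAMPLE_RATE = 48000      # samples per second (assumed)

    def spatialize(mono, listener_pos, source_pos, ear_offset=0.09):
        # Delay and attenuate the mono signal once per (virtual) ear, so the
        # channel nearer the source plays earlier and louder.
        mono = np.asarray(mono, dtype=np.float32)
        ears = [np.asarray(listener_pos, dtype=np.float64)
                + np.array([side * ear_offset, 0.0, 0.0])
                for side in (-1.0, 1.0)]                       # left ear, right ear
        channels = []
        for ear in ears:
            dist = np.linalg.norm(np.asarray(source_pos) - ear)
            delay = int(dist / SPEED_OF_SOUND * SAMPLE_RATE)   # delay in samples
            gain = 1.0 / max(dist, 1.0)                        # distance attenuation
            channels.append(np.concatenate([np.zeros(delay, np.float32), mono]) * gain)
        n = max(len(c) for c in channels)
        channels = [np.pad(c, (0, n - len(c))) for c in channels]
        return np.stack(channels, axis=1)                      # stereo array of shape (n, 2)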

In the invention, after the video data and audio data collected by the on-site capture end are acquired, a corresponding three-dimensional scene image is generated according to the video data, the captured video data is played in the three-dimensional scene image, and the playback of the audio data is controlled according to the user's viewing position in the three-dimensional scene. The user can thereby obtain richer live-scene information, creating a more realistic on-site atmosphere.

Embodiment two:

Fig. 2 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the second embodiment of the invention, detailed as follows:

In step S201, the video data and audio data collected by the on-site capture end are acquired.

In step S202, a three-dimensional scene image matching the video data is generated according to the video data.

In step S203, a scene-entry request from a user is received, and according to the request a corresponding avatar is generated in the three-dimensional scene.

In the embodiments of the invention, after the corresponding three-dimensional scene image is generated, the three-dimensional scene image may include multiple avatar positions, used to allocate avatars to the users who join the live broadcast system. For example, after joining, a user can view the positions currently in the allocatable state, select a preferred position, and activate the avatar corresponding to that position.
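
The position allocation can be sketched as a simple seat map kept by the server; the class and method names below are illustrative assumptions, not part of the patent:

    class SeatMap:
        # Tracks which avatar positions in the three-dimensional scene are free.
        def __init__(self, seat_ids):
            self.free = set(seat_ids)
            self.taken = {}                      # seat_id -> user_id

        def available(self):
            # Positions currently in the allocatable state, shown to the user.
            return sorted(self.free)

        def claim(self, user_id, seat_id):
            # Activate the avatar at seat_id for user_id, if it is still free.
            if seat_id not in self.free:
                raise ValueError(f"seat {seat_id} already taken")
            self.free.remove(seat_id)
            self.taken[seat_id] = user_id

    seats = SeatMap(["A1", "A2", "B1"])
    print(seats.available())                     # the user picks a free position
    seats.claim("user-42", "A2")                 # the chosen avatar is activated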

In step S204, the behavior state data of the user is collected, and according to the collected behavior state data the avatar in the three-dimensional scene is controlled to perform the corresponding actions.

After the user's avatar is activated, the user's behavior state data can also be collected in real time. The behavior state data may be collected through a virtual-reality helmet or another sensing device.

After the user's behavior state data is detected, the avatar is controlled according to the behavior state data to perform the corresponding action.

For example, when a hand-stretching action of the user is detected, data such as the direction, speed and amplitude of the motion of the user's arm, wrist and other body parts can be measured through an infrared sensor arranged on the virtual helmet, or through an acceleration sensor. According to these data, the corresponding parts of the avatar, such as the hand, wrist and arm, are adjusted accordingly to perform the corresponding action.
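
A sketch of this mapping is given below, assuming the helmet delivers one three-dimensional wrist acceleration per frame; the Avatar class and the naive per-frame integration are illustrative assumptions, not the patent's algorithm:

    import numpy as np

    class Avatar:
        def __init__(self):
            # Joint positions of the avatar in scene coordinates.
            self.joints = {"wrist": np.zeros(3), "elbow": np.zeros(3)}

        def move_joint(self, name, delta):
            self.joints[name] += delta

    def apply_motion(avatar, accel, dt=1.0 / 60.0):
        # Convert one acceleration sample (m/s^2) into a per-frame
        # displacement and apply it to the avatar's wrist joint.
        delta = 0.5 * np.asarray(accel, dtype=np.float64) * dt ** 2
        avatar.move_joint("wrist", delta)

    avatar = Avatar()
    apply_motion(avatar, accel=[0.0, 9.0, 1.5])  # one sensor sample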

In a further optimized embodiment of the invention, the method may also include: displaying, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.

In the three-dimensional scene image, when another user requests to enter, an avatar is created and generated at the corresponding position in the three-dimensional scene image and sent to the server. The server sends the generated avatar information to the other user terminals viewing the live broadcast, and the operating state of the avatar is displayed on those terminals.
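
The server-side fan-out described here can be pictured as follows; the message fields and class names are assumptions for illustration:

    import json

    class LiveRoom:
        def __init__(self):
            self.clients = {}                    # user_id -> send callable

        def join(self, user_id, send):
            self.clients[user_id] = send

        def publish_avatar_state(self, user_id, seat_id, action):
            # Relay one user's avatar state to every other viewer.
            msg = json.dumps({"user": user_id, "seat": seat_id, "action": action})
            for uid, send in self.clients.items():
                if uid != user_id:               # everyone except the originator
                    send(msg)

    room = LiveRoom()
    room.join("user-1", print)
    room.join("user-2", print)
    room.publish_avatar_state("user-2", "A2", "raise_hand")   # user-1 sees this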

In step S205, the captured video data is played in the three-dimensional scene image, and the audio data is played according to the user's position in the three-dimensional scene.

On the basis of embodiment one, this embodiment further adds the user's own avatar to the three-dimensional scene image, and also displays the avatars of other users watching the live broadcast, so that through the avatars the user can obtain a more realistic live effect and interact more conveniently.

Embodiment three:

Fig. 3 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the third embodiment of the invention, detailed as follows:

In step S301, the video data and audio data collected by the on-site capture end are acquired.

In step S302, a three-dimensional scene image matching the video data is generated according to the video data.

In step S303, the image region in which a person is located in the video data is detected.

Specifically, the detection of the image region of the person in the video data in this step can be triggered according to a predetermined condition. For example, when the live broadcast type is detected to be a performance broadcast, a knowledge-explanation broadcast or the like, detection of the person image in the video data is started.

In addition, a detection request from the user may be received, and image detection performed in the video data according to the request.

In step S304, image interception is performed according to the image region of the person, the intercepted image region is played in the three-dimensional scene image, and the audio data is played according to the user's position in the three-dimensional scene.

According to a preset person model, combined with the change information of the person image region between frames, the person image in the video data can be detected, the image region of the person obtained, and that region intercepted.
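
As an illustration, the detection and interception can be sketched with OpenCV's built-in HOG pedestrian detector standing in for the preset person model; the patent does not name a specific detector, so this choice is an assumption:

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def crop_person(frame):
        # Return the image region of the first detected person, or None.
        rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(rects) == 0:
            return None
        x, y, w, h = rects[0]
        return frame[y:y + h, x:x + w]           # region played in the 3-D scene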

When the intercepted image region is merged with the three-dimensional scene image, a better blending effect for the person can be obtained, so that users watching the live broadcast can view, over a larger range, an image in which the scene data is combined with the person image.

Embodiment four:

Fig. 4 shows the structural schematic of the mixed-reality-based live broadcast device provided by the fourth embodiment of the invention, detailed as follows:

The mixed-reality-based live broadcast device of the embodiment of the invention includes:

a data acquisition unit 401, configured to acquire the video data and audio data collected by the on-site capture end;

a three-dimensional scene image generating unit 402, configured to generate, according to the video data, a three-dimensional scene image matching the video data;

a data playback unit 403, configured to play the captured video data in the three-dimensional scene image, and to play the audio data according to the user's position in the three-dimensional scene.

Preferably, the device also includes:

an avatar generating unit, configured to receive a scene-entry request from a user and, according to the request, generate a corresponding avatar in the three-dimensional scene;

a first action control and display unit, configured to collect behavior state data of the user and, according to the collected behavior state data, control the avatar in the three-dimensional scene to perform the corresponding actions.

Preferably, the device also includes:

a second action control and display unit, configured to display, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.

Preferably, the data playback unit includes:

an image detection subunit, configured to detect the image region in which a person is located in the video data;

an image interception subunit, configured to perform image interception according to the image region of the person, and to play the intercepted image region in the three-dimensional scene image.

The mixed-reality-based live broadcast device of the embodiment of the invention corresponds to the mixed-reality-based live broadcasting methods of embodiments one to three, and is not described again here.

In addition, an embodiment of the invention also provides a mixed-reality-based live broadcast system, the system including:

a behavior data acquisition module, a processor and a display module, wherein:

the behavior data acquisition module is configured to collect behavior state data of a user and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by the on-site capture end, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate the user's avatar in the three-dimensional scene image, and control the motion state of the avatar according to the collected behavior state data;

the display module is configured to display the three-dimensional scene image.

In a first possible implementation, the behavior data acquisition module and the display module are a head-mounted virtual-reality helmet. They are, of course, not limited to this: the behavior data acquisition module may also include acceleration sensors arranged on the hands, the legs and so on, and the display module may also be a device such as virtual-reality glasses. The mixed-reality-based live broadcast system corresponds to the mixed-reality-based live broadcasting methods of embodiments one to three.

In the several embodiments provided by the invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely schematic. The division into units is only a division by logical function, and other divisions are possible in an actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the invention in essence, or the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the methods described in the embodiments of the invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

The above are only preferred embodiments of the invention and are not intended to limit it. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (10)

1. A live broadcasting method based on mixed reality, characterized in that the method comprises:
acquiring video data and audio data collected by an on-site capture end;
generating, according to the video data, a three-dimensional scene image matching the video data;
playing the captured video data in the three-dimensional scene image, and playing the audio data according to the user's position in the three-dimensional scene.
2. The method according to claim 1, characterized in that the method further comprises:
receiving a scene-entry request from a user and, according to the request, generating a corresponding avatar in the three-dimensional scene;
collecting behavior state data of the user and, according to the collected behavior state data, controlling the avatar in the three-dimensional scene to perform the corresponding actions.
3. The method according to claim 2, characterized in that the method further comprises:
displaying, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.
4. The method according to claim 1, characterized in that the step of playing the captured video data in the three-dimensional scene image comprises:
detecting the image region in which a person is located in the video data;
performing image interception according to the image region of the person, and playing the intercepted image region in the three-dimensional scene image.
5. A live broadcast device based on mixed reality, characterized in that the device comprises:
a data acquisition unit, configured to acquire video data and audio data collected by an on-site capture end;
a three-dimensional scene image generating unit, configured to generate, according to the video data, a three-dimensional scene image matching the video data;
a data playback unit, configured to play the captured video data in the three-dimensional scene image, and to play the audio data according to the user's position in the three-dimensional scene.
6. The device according to claim 5, characterized in that the device further comprises:
an avatar generating unit, configured to receive a scene-entry request from a user and, according to the request, generate a corresponding avatar in the three-dimensional scene;
a first action control and display unit, configured to collect behavior state data of the user and, according to the collected behavior state data, control the avatar in the three-dimensional scene to perform the corresponding actions.
7. The device according to claim 6, characterized in that the device further comprises:
a second action control and display unit, configured to display, in the three-dimensional scene, the avatars of other users at the positions they selected, together with their actions, the actions corresponding to the behavior state data of those users.
8. The device according to claim 5, characterized in that the data playback unit comprises:
an image detection subunit, configured to detect the image region in which a person is located in the video data;
an image interception subunit, configured to perform image interception according to the image region of the person, and to play the intercepted image region in the three-dimensional scene image.
9. A live broadcast system based on mixed reality, characterized in that the system comprises a behavior data acquisition module, a processor and a display module, wherein:
the behavior data acquisition module is configured to collect behavior state data of a user and send the collected behavior state data to the processor;
the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by an on-site capture end, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate the user's avatar in the three-dimensional scene image, and control the motion state of the avatar according to the collected behavior state data;
the display module is configured to display the three-dimensional scene image.
10. The system according to claim 9, characterized in that the behavior data acquisition module and the display module are a head-mounted virtual-reality helmet.
CN201610639734.3A 2016-08-05 2016-08-05 Live broadcasting method, device and system based on mixed reality CN106303555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 Live broadcasting method, device and system based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 Live broadcasting method, device and system based on mixed reality

Publications (2)

Publication Number Publication Date
CN106303555A 2017-01-04
CN106303555B CN106303555B (en) 2019-12-03

Family

ID=57666044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 Live broadcasting method, device and system based on mixed reality

Country Status (1)

Country Link
CN (1) CN106303555B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Method and device for processing an avatar
CN107590817A (en) * 2017-09-20 2018-01-16 北京奇虎科技有限公司 Real-time data processing method and device for an image capture device, and computing device
CN107592475A (en) * 2017-09-20 2018-01-16 北京奇虎科技有限公司 Video data processing method and device, and computing device
CN107613161A (en) * 2017-10-12 2018-01-19 北京奇虎科技有限公司 Virtual-world-based video data processing method and device, and computing device
CN107613360A (en) * 2017-09-20 2018-01-19 北京奇虎科技有限公司 Real-time video data processing method and device, and computing device
CN107633228A (en) * 2017-09-20 2018-01-26 北京奇虎科技有限公司 Video data processing method and device, and computing device
CN107635131A (en) * 2017-09-01 2018-01-26 北京雷石天地电子技术有限公司 Virtual reality implementation method and system
CN107680105A (en) * 2017-10-12 2018-02-09 北京奇虎科技有限公司 Virtual-world-based real-time video data processing method and device, and computing device
CN107680170A (en) * 2017-10-12 2018-02-09 北京奇虎科技有限公司 Virtual-world-based image synthesis method and device, and computing device
CN107705316A (en) * 2017-09-20 2018-02-16 北京奇虎科技有限公司 Real-time data processing method and device for an image capture device, and computing device
CN107743263A (en) * 2017-09-20 2018-02-27 北京奇虎科技有限公司 Real-time video data processing method and device, and computing device
CN108492363A (en) * 2018-03-26 2018-09-04 广东欧珀移动通信有限公司 Augmented-reality-based combination method and device, storage medium, and electronic device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032186A (en) * 2004-09-03 2007-09-05 P·津筥 Method and apparatus for producing a phantom three-dimensional sound space with recorded sound
CN101055494A (en) * 2006-04-13 2007-10-17 上海虚拟谷数码科技有限公司 Virtual scene roaming method and system based on a spatial-index cubic panoramic video
CN103460256A (en) * 2011-03-29 2013-12-18 高通股份有限公司 Anchoring virtual images to real world surfaces in augmented reality systems
CN102737399A (en) * 2012-06-20 2012-10-17 北京水晶石数字科技股份有限公司 Method for roaming an ancient painting
CN104869524A (en) * 2014-02-26 2015-08-26 腾讯科技(深圳)有限公司 Method and device for processing sound in a three-dimensional virtual scene
CN105653020A (en) * 2015-07-14 2016-06-08 朱金彪 Time-traveling method and apparatus, and glasses or helmet using the same
CN205071236U (en) * 2015-11-02 2016-03-02 徐文波 Head-mounted audio and video processing device

Also Published As

Publication number Publication date
CN106303555B (en) 2019-12-03

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180613

Address after: 518000 Guangdong Shenzhen Nanshan District Nantou Street Nanshan Road 1088 South Garden maple leaf building 10L

Applicant after: Shenzhen morden century science and Technology Co., Ltd.

Address before: 518000, 7 floor, Fuli building, 1 KFA Road, Nanshan street, Nanshan District, Shenzhen, Guangdong.

Applicant before: Shenzhen bean Technology Co., Ltd.

GR01 Patent grant