CN106303555B - Live broadcasting method, device, and system based on mixed reality - Google Patents

Live broadcasting method, device, and system based on mixed reality

Info

Publication number
CN106303555B
CN106303555B CN201610639734.3A
Authority
CN
China
Prior art keywords
data
according
image
video data
dimensional
Prior art date
Application number
CN201610639734.3A
Other languages
Chinese (zh)
Other versions
CN106303555A (en)
Inventor
周苑龙
秦凯
熊飞
Original Assignee
深圳市摩登世纪科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市摩登世纪科技有限公司 filed Critical 深圳市摩登世纪科技有限公司
Priority to CN201610639734.3A priority Critical patent/CN106303555B/en
Publication of CN106303555A publication Critical patent/CN106303555A/en
Application granted granted Critical
Publication of CN106303555B publication Critical patent/CN106303555B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements or protocols for real-time communications
    • H04L65/40Services or applications
    • H04L65/4069Services related to one way streaming
    • H04L65/4076Multicast or broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6587Control parameters, e.g. trick play commands, viewpoint selection

Abstract

The present invention provides a live broadcasting method based on mixed reality. The method comprises: obtaining video data and audio data collected by an on-site data collection end; generating, according to the video data, a three-dimensional scene image matching the video data; playing the collected video data in the three-dimensional scene image, and playing the presented audio data according to the position of the user in the three-dimensional scene. The invention enables users to obtain richer on-site information, thereby creating a better live atmosphere.

Description

Live broadcasting method, device, and system based on mixed reality

Technical field

The invention belongs to the field of the Internet, and more particularly relates to a mixed-reality-based live broadcasting method, device, and system.

Background technique

With the continuous development of network communication technology, the modes of data transmission have become increasingly diversified. For example, intelligent terminals can transmit data at high speed through mobile communication networks (3G, 4G, etc.), WIFI networks, or wired networks. As transmission speeds have grown, network live broadcasting has been added to the traditional live-television mode of delivering video content, and users can also send interactive content in real time while watching the live content, increasing the interactivity of the broadcast.

Current live broadcasting methods generally collect video data through an on-site camera and collect on-site audio data through a microphone. The audio data and video data are encoded and transmitted to user terminals, which decode and play the encoded data, so that a user can play the live data on any terminal connected to the network.

Since existing live broadcasting methods play the content directly on an intelligent terminal, they are limited to the playback of sound and video, which is not conducive to users obtaining richer on-site information or to creating a better live atmosphere.

Summary of the invention

The purpose of the present invention is to provide a mixed-reality-based live broadcasting method, device, and system, so as to solve the problem that live broadcasting methods in the prior art are not conducive to users obtaining richer on-site information or to creating a better live atmosphere.

In a first aspect, an embodiment of the present invention provides a mixed-reality-based live broadcasting method, the method comprising:

obtaining the video data and audio data collected by an on-site data collection end;

generating, according to the video data, a three-dimensional scene image matching the video data;

playing the collected video data in the three-dimensional scene image, and playing the presented audio data according to the position of the user in the three-dimensional scene.

With reference to the first aspect, in a first possible implementation of the first aspect, the method further comprises:

receiving a scene entry request from a user, and generating, according to the request, a corresponding virtual avatar in the three-dimensional scene;

collecting behavior state data of the user, and controlling the virtual avatar in the three-dimensional scene to execute a corresponding action according to the collected behavior state data.

With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the method further comprises:

displaying, in the three-dimensional scene, the virtual avatars and actions of other users at positions of their selection, the actions corresponding to the behavior state data of the other users.

With reference to the first aspect, in a third possible implementation of the first aspect, the step of playing the collected video data in the three-dimensional scene image comprises:

detecting the image region where a person is located in the video data;

performing image interception on the image region where the person is located, and playing the intercepted image region in the three-dimensional scene image.

In a second aspect, an embodiment of the present invention provides a mixed-reality-based live broadcast device, the device comprising:

a data acquisition unit, configured to obtain the video data and audio data collected by an on-site data collection end;

a three-dimensional scene image generation unit, configured to generate, according to the video data, a three-dimensional scene image matching the video data;

a data playback unit, configured to play the collected video data in the three-dimensional scene image, and to play the presented audio data according to the position of the user in the three-dimensional scene.

With reference to the second aspect, in a first possible implementation of the second aspect, the device further comprises:

a virtual avatar generation unit, configured to receive a scene entry request from a user and, according to the request, generate a corresponding virtual avatar in the three-dimensional scene;

a first action control display unit, configured to collect behavior state data of the user and, according to the collected behavior state data, control the virtual avatar in the three-dimensional scene to execute a corresponding action.

With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the device further comprises:

a second action control display unit, configured to display, in the three-dimensional scene, the virtual avatars and actions of other users at positions of their selection, the actions corresponding to the behavior state data of the other users.

With reference to the second aspect, in a third possible implementation of the second aspect, the data playback unit comprises:

an image detection subunit, configured to detect the image region where a person is located in the video data;

an image interception subunit, configured to perform image interception according to the image region where the person is located, and to play the intercepted image region in the three-dimensional scene image.

In a third aspect, an embodiment of the present invention provides a mixed-reality-based live broadcast system, the system comprising a behavior data collection module, a processor, and a display module, wherein:

the behavior data collection module is configured to collect behavior state data of a user and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by an on-site data collection end, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate a virtual avatar of the user in the three-dimensional scene image, and control the motion state of the virtual avatar according to the collected behavior state data;

the display module is configured to display the three-dimensional scene image.

With reference to the third aspect, in a first possible implementation of the third aspect, the behavior data collection module and the display module are a head-mounted virtual reality helmet.

In the present invention, after the video data and audio data collected by the on-site data collection end are obtained, a corresponding three-dimensional scene image is generated according to the video data, the collected video data is played in the three-dimensional scene image, and the playback of the audio data is controlled according to the viewing position of the user in the three-dimensional scene, enabling the user to obtain richer on-site information and thereby creating a better live atmosphere.

Description of the drawings

Fig. 1 is a flowchart of the mixed-reality-based live broadcasting method provided by the first embodiment of the present invention;

Fig. 2 is a flowchart of the mixed-reality-based live broadcasting method provided by the second embodiment of the present invention;

Fig. 3 is a flowchart of the mixed-reality-based live broadcasting method provided by the third embodiment of the present invention;

Fig. 4 is a structural schematic diagram of the mixed-reality-based live broadcast device provided by the fourth embodiment of the present invention.

Detailed description of the embodiments

In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.

The purpose of the embodiments of the present invention is to provide a mixed-reality-based live broadcasting method with a better live atmosphere, solving the problem that, in prior-art live broadcasting, the video data and audio data are usually played directly; this playback mode cannot effectively restore the on-site audio-visual information, and the user cannot experience the live atmosphere of the broadcast. The present invention is further described below with reference to the accompanying drawings.

Embodiment one:

Fig. 1 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the first embodiment of the present invention, detailed as follows:

In step S101, the video data and audio data collected by an on-site data collection end are obtained.

Specifically, the on-site data collection end in this embodiment of the present invention may be the professional camera used for live broadcasting at a sports venue, a concert, a television program, or the like, together with the microphone used for on-site commentary. The video data of the camera and the audio data of the microphone are encoded, compressed, and sent to a live broadcast server over the network, after which other user terminals can obtain the live video data and audio data by requesting and accessing the server.

Of course, the video data and audio data may also be on-site data collected by a camera and microphone connected to a computer, or on-site data collected by a device such as a smartphone.

In this embodiment of the present invention, the collected video data is two-dimensional image data, which under normal circumstances generally includes a portrait of the broadcaster. The live broadcast can be classified according to its content, for example into knowledge-explanation broadcasts, performance broadcasts, sports broadcasts, and other television-program commentary broadcasts.

In step S102, a three-dimensional scene image matching the video data is generated according to the video data.

In this embodiment of the present invention, the three-dimensional scene image may be selected from multiple pre-stored three-dimensional scene images, matched against the video data after the video data collected by the on-site data collection end is obtained. The matching method may include computing the similarity between the background image of the video data and each three-dimensional scene image; when the similarity exceeds a certain threshold, the video data is considered to match the corresponding scene data.
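As an illustrative sketch only (the patent does not specify a similarity measure), the matching step above could be implemented with a grayscale-histogram cosine similarity against a library of pre-stored scenes; the function names, the histogram measure, and the threshold value are all assumptions:

```python
import numpy as np

def histogram_similarity(frame, scene, bins=32):
    """Cosine similarity between grayscale intensity histograms (0..255)."""
    h1, _ = np.histogram(frame, bins=bins, range=(0, 256))
    h2, _ = np.histogram(scene, bins=bins, range=(0, 256))
    h1 = h1 / (np.linalg.norm(h1) + 1e-9)  # normalise so dot product is cosine
    h2 = h2 / (np.linalg.norm(h2) + 1e-9)
    return float(np.dot(h1, h2))

def match_scene(frame, scene_library, threshold=0.8):
    """Return the id of the best-matching pre-stored scene, or None
    if no scene exceeds the similarity threshold."""
    best_id, best_score = None, threshold
    for scene_id, scene_img in scene_library.items():
        score = histogram_similarity(frame, scene_img)
        if score > best_score:
            best_id, best_score = scene_id, score
    return best_id
```

A real implementation would compare the extracted background of a video frame, not the raw frame, but the threshold-based selection logic is the same.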

Alternatively, corresponding audio features may be preset for each three-dimensional scene image. When the similarity between the audio features of a three-dimensional scene image and the collected audio data is greater than a certain value, that three-dimensional scene image is taken as the match for the collected data.

Of course, the scene data may also be generated automatically from the collected video data. The generation method may automatically generate a corresponding three-dimensional scene image from the images in the collected video data in combination with a 3-D image generation tool. Alternatively, the corresponding three-dimensional scene image may be looked up according to a user-defined live broadcast type.

For example, for a performance-type live scene, the three-dimensional scene image of a concert may be generated automatically, and the collected video data played on the large screen and the main stage of the concert scene. For a knowledge-explanation broadcast, a classroom scene may be generated and the video data played at the position of the lectern.

In step S103, the collected video data is played in the three-dimensional scene image, and the presented audio data is played according to the position of the user in the three-dimensional scene.

In the generated three-dimensional scene image, the video data can be played at a preset position. The user can watch the video data in the generated three-dimensional scene image and obtain richer on-site information, enhancing the live atmosphere of the video playback.

Moreover, the invention further includes obtaining, for the user's viewing position in the three-dimensional scene image, the audio data corresponding to that position. The viewing position of the user in the three-dimensional scene image may be allocated according to the user's request, and the on-site audio data is varied accordingly as the sound corresponding to that position. For example, when the viewing position of the user is at position A, the relationship between the user and the sound-source position in the three-dimensional scene image is calculated, and the playback timing of the left and right channels is controlled accordingly, thereby simulating the sound effect of the actual venue and further enhancing the live atmosphere.
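The left/right channel timing described above amounts to an interaural time difference. A minimal sketch under assumed geometry (listener facing +y, ears offset on the x axis, sources in the horizontal plane; the constants and function name are not from the patent):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
HEAD_RADIUS = 0.09      # approximate ear offset from head centre, metres

def channel_delays(listener_xy, source_xy):
    """Arrival delay (seconds) of a sound source at the left and right ears.

    The channel with the smaller delay plays first, shifting the
    perceived source toward that side.
    """
    lx, ly = listener_xy
    sx, sy = source_xy
    d_left = math.hypot(sx - (lx - HEAD_RADIUS), sy - ly)
    d_right = math.hypot(sx - (lx + HEAD_RADIUS), sy - ly)
    return d_left / SPEED_OF_SOUND, d_right / SPEED_OF_SOUND
```

A playback engine would then offset each channel's start time by these delays (and typically also attenuate by distance) for every sound source in the scene.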

In the present invention, after the video data and audio data collected by the on-site data collection end are obtained, a corresponding three-dimensional scene image is generated according to the video data, the collected video data is played in the three-dimensional scene image, and the playback of the audio data is controlled according to the viewing position of the user in the three-dimensional scene, enabling the user to obtain richer on-site information and thereby creating a better live atmosphere.

Embodiment two:

Fig. 2 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the second embodiment of the present invention, detailed as follows:

In step S201, the video data and audio data collected by an on-site data collection end are obtained.

In step S202, a three-dimensional scene image matching the video data is generated according to the video data.

In step S203, a scene entry request from a user is received, and according to the request a corresponding virtual avatar is generated in the three-dimensional scene.

In this embodiment of the present invention, after the corresponding three-dimensional scene image is generated, the three-dimensional scene image may include multiple avatar positions used to allocate virtual avatars to users who join the live broadcast system. For example, after accessing the system and checking which positions are currently in an allocatable state, a user can choose a favorite position and activate the virtual avatar corresponding to that position.
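The entry-request flow above can be sketched as a toy slot allocator; the class, method names, and position labels are illustrative only, since the patent specifies just that a scene holds multiple allocatable positions:

```python
class AvatarSlots:
    """Tracks which avatar positions of a 3-D scene are still allocatable."""

    def __init__(self, positions):
        self.free = set(positions)
        self.assigned = {}          # user_id -> position

    def available(self):
        """Positions a joining user may still pick from."""
        return sorted(self.free)

    def enter(self, user_id, position):
        """Handle a scene-entry request: activate the avatar at `position`."""
        if position not in self.free:
            raise ValueError(f"position {position!r} is not allocatable")
        self.free.discard(position)
        self.assigned[user_id] = position
        return position

slots = AvatarSlots(["front-left", "front-right", "balcony"])
slots.enter("user-1", "front-left")
```

A production system would also release a slot when the user leaves and replicate the assignment to other viewers' terminals.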

In step S204, behavior state data of the user is collected, and according to the collected behavior state data, the virtual avatar in the three-dimensional scene is correspondingly controlled to execute the corresponding action.

After the user's virtual avatar is activated, the behavior state data of the user can also be collected in real time. The behavior state data may be collected by a virtual reality helmet or other sensing devices.

After the behavior state data of the user is detected, the virtual avatar is correspondingly controlled to execute actions according to the behavior state data.

For example, when a reaching motion of the user is detected, data such as the direction, speed, and amplitude of the motion of the user's arm and forearm can be detected by an infrared sensor arranged on the virtual helmet or by an acceleration sensor. The corresponding parts of the virtual avatar are adjusted accordingly to execute the corresponding action, for example moving the arm, wrist, and other parts of the hand.
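The sensor-to-avatar mapping above might look like the following sketch for a single joint; the one-degree-of-freedom arm model, the sample format `(direction, speed, amplitude)`, and the joint limits are assumptions, not details from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class AvatarArm:
    """Minimal avatar arm whose pose follows collected behavior-state samples."""
    angle_deg: float = 0.0                      # shoulder pitch; 0 = at rest
    history: list = field(default_factory=list)  # poses sent to renderers

    def apply_sample(self, direction, speed, amplitude):
        """Update the pose from one motion sample (direction is -1 or +1)."""
        self.angle_deg += direction * speed * amplitude
        self.angle_deg = max(0.0, min(180.0, self.angle_deg))  # joint limits
        self.history.append(self.angle_deg)

arm = AvatarArm()
for sample in [(+1, 2.0, 10.0), (+1, 2.0, 10.0), (-1, 1.0, 5.0)]:
    arm.apply_sample(*sample)
```

Each applied pose would then be broadcast to the other viewers' terminals so their copies of the avatar execute the same action.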

In a further optimized embodiment of the present invention, the method may further comprise: displaying, in the three-dimensional scene, the virtual avatars and actions of other users at positions of their selection, the actions corresponding to the behavior state data of the other users.

Correspondingly, when another user requests to enter, a virtual avatar is created and generated at the corresponding position in the three-dimensional scene image, and the generated avatar information is sent to the server; the server sends the avatar information to the other user terminals watching the live broadcast, and the action state of the virtual avatar is displayed in those terminals.

In step S205, the collected video data is played in the three-dimensional scene image, and the presented audio data is played according to the position of the user in the three-dimensional scene.

On the basis of the first embodiment, this embodiment of the present invention further adds the user's own virtual avatar to the three-dimensional scene image, and also enables the user to watch the virtual avatars of the other users watching the broadcast, so that a better live effect is obtained and users can interact more easily.

Embodiment three:

Fig. 3 shows the implementation flow of the mixed-reality-based live broadcasting method provided by the third embodiment of the present invention, detailed as follows:

In step S301, the video data and audio data collected by an on-site data collection end are obtained.

In step S302, a three-dimensional scene image matching the video data is generated according to the video data.

In step S303, the image region where the person is located in the video data is detected.

Specifically, the detection in this step of the image region where the person is located may be triggered according to a predetermined condition. For example, when the live broadcast type is detected to be a performance broadcast, a knowledge-explanation broadcast, or a similar type, detection of the person image in the video data is started.

In addition, a detection request from the user may be received, and image detection performed in the video data according to the request.

In step S304, image interception is performed according to the image region where the person is located, the intercepted image region is played in the three-dimensional scene image, and the presented audio data is played according to the position of the user in the three-dimensional scene.

According to a preset person model, in combination with the change information of the person image region between frames, the person image in the video data can be detected to obtain the image region where the person is located, and that region is then intercepted.
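As a crude stand-in for the person-model detection described above, the inter-frame change information alone can yield an interceptable bounding box; this frame-differencing sketch and its function names are illustrative assumptions, not the patent's actual detector:

```python
import numpy as np

def person_region(prev_frame, frame, diff_threshold=30):
    """Bounding box (top, bottom, left, right) of the region that changed
    between two grayscale frames, or None if nothing moved."""
    moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold
    if not moving.any():
        return None
    rows = np.where(moving.any(axis=1))[0]
    cols = np.where(moving.any(axis=0))[0]
    return int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1

def intercept(frame, box):
    """Crop the detected region so it can be composited into the 3-D scene."""
    top, bottom, left, right = box
    return frame[top:bottom, left:right]
```

A real pipeline would refine this box with a trained person model and matte out the background before fusing the crop into the scene image.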

When the intercepted image region is fused with the three-dimensional scene image, a better fusion effect for the person can be obtained, so that a user watching the broadcast sees the person image combined with a wider view of the scene data.

Example IV:

Fig. 4 shows the structural schematic diagram of the mixed-reality-based live broadcast device provided by the fourth embodiment of the present invention, detailed as follows:

The mixed-reality-based live broadcast device described in this embodiment of the present invention comprises:

a data acquisition unit 401, configured to obtain the video data and audio data collected by an on-site data collection end;

a three-dimensional scene image generation unit 402, configured to generate, according to the video data, a three-dimensional scene image matching the video data;

a data playback unit 403, configured to play the collected video data in the three-dimensional scene image, and to play the presented audio data according to the position of the user in the three-dimensional scene.

Preferably, the device further comprises:

a virtual avatar generation unit, configured to receive a scene entry request from a user and, according to the request, generate a corresponding virtual avatar in the three-dimensional scene;

a first action control display unit, configured to collect behavior state data of the user and, according to the collected behavior state data, control the virtual avatar in the three-dimensional scene to execute a corresponding action.

Preferably, the device further comprises:

a second action control display unit, configured to display, in the three-dimensional scene, the virtual avatars and actions of other users at positions of their selection, the actions corresponding to the behavior state data of the other users.

Preferably, the data playback unit comprises:

an image detection subunit, configured to detect the image region where a person is located in the video data;

an image interception subunit, configured to perform image interception according to the image region where the person is located, and to play the intercepted image region in the three-dimensional scene image.

The mixed-reality-based live broadcast device described in this embodiment of the present invention corresponds to the mixed-reality-based live broadcasting methods described in embodiments one to three, and is therefore not described again here.

In addition, an embodiment of the present invention further provides a mixed-reality-based live broadcast system, the system comprising:

a behavior data collection module, a processor, and a display module, wherein:

the behavior data collection module is configured to collect behavior state data of a user and send the collected behavior state data to the processor;

the processor is configured to receive the collected behavior state data, receive the video data and audio data collected by an on-site data collection end, generate a corresponding three-dimensional scene image according to the collected video data, play the video data in the three-dimensional scene image, generate a virtual avatar of the user in the three-dimensional scene image, and control the motion state of the virtual avatar according to the collected behavior state data;

the display module is configured to display the three-dimensional scene image.

The behavior data collection module and the display module may be a head-mounted virtual reality helmet. Of course, they are not limited to this: the behavior data collection module may also include acceleration sensors and the like arranged on the hands or legs, and the display module may also be a device such as virtual reality glasses. The mixed-reality-based live broadcast system corresponds to the mixed-reality-based live broadcasting methods described in embodiments one to three.

In the several embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A live broadcasting method based on mixed reality, characterized in that the method comprises:
obtaining video data and audio data collected by an on-site data collection end;
generating, according to the video data, a three-dimensional scene image matching the video data, specifically: obtaining the audio features of the video data, and matching a corresponding three-dimensional scene image among multiple pre-stored three-dimensional scene images according to the audio features;
playing the collected video data in the three-dimensional scene image, and playing the presented audio data according to the position of the user in the three-dimensional scene.
2. The method according to claim 1, characterized in that the method further comprises:
receiving a scene entry request from a user, and generating, according to the request, a corresponding virtual avatar in the three-dimensional scene;
collecting behavior state data of the user, and controlling the virtual avatar in the three-dimensional scene to execute a corresponding action according to the collected behavior state data.
3. The method according to claim 2, characterized in that the method further comprises:
displaying, in the three-dimensional scene, the virtual avatars and actions of other users at positions of their selection, the actions corresponding to the behavior state data of the other users.
4. The method according to claim 1, characterized in that the step of playing the collected video data in the three-dimensional scene image comprises:
detecting the image region where a person is located in the video data;
performing image interception on the image region where the person is located, and playing the intercepted image region in the three-dimensional scene image.
5. A live broadcast device based on mixed reality, characterized in that the device comprises:
a data acquisition unit, configured to obtain the video data and audio data collected by an on-site data collection end;
a three-dimensional scene image generation unit, configured to generate, according to the video data, a three-dimensional scene image matching the video data, specifically: to obtain the audio features of the video data and match a corresponding three-dimensional scene image among multiple pre-stored three-dimensional scene images according to the audio features;
a data playback unit, configured to play the collected video data in the three-dimensional scene image, and to play the presented audio data according to the position of the user in the three-dimensional scene.
6. The device according to claim 5, wherein the device further comprises:
an avatar generation unit configured to receive a scene entry request from a user and, according to the request, to generate a corresponding avatar in the three-dimensional scene;
a first action control display unit configured to collect behavior state data of the user and, according to the collected behavior state data, to control the corresponding avatar in the three-dimensional scene to perform a corresponding action.
7. The device according to claim 6, wherein the device further comprises:
a second action control display unit configured to display, in the three-dimensional scene, avatars and actions of other users at selected corresponding positions, the actions corresponding to the behavior state data of the other users.
8. The device according to claim 5, wherein the data playback unit comprises:
an image detection subunit configured to detect an image region in which a person appears in the video data;
an image cropping subunit configured to perform image cropping according to the image region in which the person appears, and to play the cropped image region in the three-dimensional scene image.
9. A live broadcast system based on mixed reality, wherein the live broadcast system comprises a behavior data acquisition module, a processor, and a display module, wherein:
the behavior data acquisition module is configured to collect behavior state data of a user and to send the collected behavior state data to the processor;
the processor is configured to receive the collected behavior state data and the video data and audio data collected by an on-site data collection end, and to generate a corresponding three-dimensional scene image according to the collected video data, specifically: obtaining audio features of the video data, and matching a corresponding three-dimensional scene image among a plurality of prestored three-dimensional scene images according to the audio features; the processor is further configured to play the video data in the three-dimensional scene image, to generate an avatar of the user in the three-dimensional scene image, and to control the motion state of the avatar according to the collected behavior state data;
the display module is configured to display the three-dimensional scene image.
10. The system according to claim 9, wherein the behavior data acquisition module and the display module are a head-mounted virtual reality headset.
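The data flow of the system claims can be summarized in a small sketch tying the pieces together: the processor receives the live feed, matches a prestored scene by audio features, and drives per-user avatars from behavior-state data. All class and method names here are hypothetical illustrations, not from the patent; the scene match uses a single scalar audio feature for brevity.

```python
class MixedRealityLiveSystem:
    """Illustrative end-to-end flow of the claimed system (claim 9)."""

    def __init__(self, scene_library):
        # Prestored 3-D scenes, each keyed to a scalar audio-feature profile.
        self.scene_library = scene_library
        self.avatars = {}   # user_id -> current avatar action
        self.scene = None
        self.video = None

    def on_live_feed(self, video, audio_feature):
        # Match a prestored scene to the live feed's audio feature,
        # then play the video inside that scene.
        self.scene = min(self.scene_library,
                         key=lambda s: abs(self.scene_library[s] - audio_feature))
        self.video = video

    def on_behavior_state(self, user_id, action):
        # Drive the user's avatar from captured behavior-state data.
        self.avatars[user_id] = action
```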
CN201610639734.3A 2016-08-05 2016-08-05 A kind of live broadcasting method based on mixed reality, device and system CN106303555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 A kind of live broadcasting method based on mixed reality, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 A kind of live broadcasting method based on mixed reality, device and system

Publications (2)

Publication Number Publication Date
CN106303555A CN106303555A (en) 2017-01-04
CN106303555B true CN106303555B (en) 2019-12-03

Family

ID=57666044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610639734.3A CN106303555B (en) 2016-08-05 2016-08-05 A kind of live broadcasting method based on mixed reality, device and system

Country Status (1)

Country Link
CN (1) CN106303555B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Process the method and device of virtual image
CN107635131B (en) * 2017-09-01 2020-05-19 北京雷石天地电子技术有限公司 Method and system for realizing virtual reality
CN107705316A (en) * 2017-09-20 2018-02-16 北京奇虎科技有限公司 Image capture device Real-time Data Processing Method and device, computing device
CN107743263A (en) * 2017-09-20 2018-02-27 北京奇虎科技有限公司 Video data real-time processing method and device, computing device
CN107613360A (en) * 2017-09-20 2018-01-19 北京奇虎科技有限公司 Video data real-time processing method and device, computing device
CN107592475A (en) * 2017-09-20 2018-01-16 北京奇虎科技有限公司 Video data handling procedure and device, computing device
CN107590817A (en) * 2017-09-20 2018-01-16 北京奇虎科技有限公司 Image capture device Real-time Data Processing Method and device, computing device
CN107633228A (en) * 2017-09-20 2018-01-26 北京奇虎科技有限公司 Video data handling procedure and device, computing device
CN107680170A (en) * 2017-10-12 2018-02-09 北京奇虎科技有限公司 View synthesis method and device based on virtual world, computing device
CN107680105A (en) * 2017-10-12 2018-02-09 北京奇虎科技有限公司 Video data real-time processing method and device, computing device based on virtual world
CN107613161A (en) * 2017-10-12 2018-01-19 北京奇虎科技有限公司 Video data handling procedure and device, computing device based on virtual world
CN108014490A (en) * 2017-12-29 2018-05-11 安徽创视纪科技有限公司 A kind of outdoor scene secret room based on MR mixed reality technologies
CN108492363B (en) * 2018-03-26 2020-03-10 Oppo广东移动通信有限公司 Augmented reality-based combination method and device, storage medium and electronic equipment
CN110602517A (en) * 2019-09-17 2019-12-20 腾讯科技(深圳)有限公司 Live broadcast method, device and system based on virtual environment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032186A (en) * 2004-09-03 2007-09-05 P·津筥 Method and apparatus for producing a phantom three-dimensional sound space with recorded sound
CN103460256A (en) * 2011-03-29 2013-12-18 高通股份有限公司 Anchoring virtual images to real world surfaces in augmented reality systems
CN205071236U (en) * 2015-11-02 2016-03-02 徐文波 Wear -type sound video processing equipment
CN105653020A (en) * 2015-07-14 2016-06-08 朱金彪 Time traveling method and apparatus and glasses or helmet using same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101055494B (en) * 2006-04-13 2011-03-16 上海虚拟谷数码科技有限公司 Dummy scene roaming method and system based on spatial index cube panoramic video
CN102737399A (en) * 2012-06-20 2012-10-17 北京水晶石数字科技股份有限公司 Method for roaming ancient painting
CN104869524B (en) * 2014-02-26 2018-02-16 腾讯科技(深圳)有限公司 Sound processing method and device in three-dimensional virtual scene

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032186A (en) * 2004-09-03 2007-09-05 P·津筥 Method and apparatus for producing a phantom three-dimensional sound space with recorded sound
CN103460256A (en) * 2011-03-29 2013-12-18 高通股份有限公司 Anchoring virtual images to real world surfaces in augmented reality systems
CN105653020A (en) * 2015-07-14 2016-06-08 朱金彪 Time traveling method and apparatus and glasses or helmet using same
CN205071236U (en) * 2015-11-02 2016-03-02 徐文波 Wear -type sound video processing equipment

Also Published As

Publication number Publication date
CN106303555A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
US10075701B2 (en) Methods and apparatus for mapping at least one received image to a surface of a model in a manner that efficiently uses the image content as a texture
US9363569B1 (en) Virtual reality system including social graph
US9794541B2 (en) Video capture system control using virtual cameras for augmented reality
JP6576538B2 (en) Broadcast haptic effects during group events
US9430048B2 (en) Method and apparatus for controlling multi-experience translation of media content
US9253440B2 (en) Augmenting a video conference
US20180018944A1 (en) Automated object selection and placement for augmented reality
CN104012098B (en) The media experience of enhancing is provided using haptic technology
Wu et al. A dataset for exploring user behaviors in VR spherical video streaming
US9384588B2 (en) Video playing method and system based on augmented reality technology and mobile terminal
JP6436320B2 (en) Live selective adaptive bandwidth
US9026596B2 (en) Sharing of event media streams
US9751015B2 (en) Augmented reality videogame broadcast programming
KR102077108B1 (en) Apparatus and method for providing contents experience service
Zhang et al. An automated end-to-end lecture capture and broadcasting system
WO2016009864A1 (en) Information processing device, display device, information processing method, program, and information processing system
CN105430455B (en) information presentation method and system
WO2017181600A1 (en) Method and device for controlling overlay comment
US20120093481A1 (en) Intelligent determination of replays based on event identification
US10419818B2 (en) Method and apparatus for augmenting media content
WO2015025309A1 (en) System and method for real-time processing of ultra-high resolution digital video
KR20120080410A (en) Contents synchronization apparatus and method for providing synchronized interaction
WO2015078199A1 (en) Live interaction method and device, client, server and system
US20120092475A1 (en) Method, Apparatus And System For Implementing Interaction Between A Video And A Virtual Network Scene
CN102572539A (en) Automatic passive and anonymous feedback system

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180613

Address after: 518000 Guangdong Shenzhen Nanshan District Nantou Street Nanshan Road 1088 South Garden maple leaf building 10L

Applicant after: Shenzhen morden century science and Technology Co., Ltd.

Address before: 518000, 7 floor, Fuli building, 1 KFA Road, Nanshan street, Nanshan District, Shenzhen, Guangdong.

Applicant before: Shenzhen bean Technology Co., Ltd.


TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant