CN111698522A - Live system based on mixed reality - Google Patents

Live system based on mixed reality

Info

Publication number
CN111698522A
Authority
CN
China
Prior art keywords
data
module
real
dimensional scene
audio
Prior art date
Legal status: Pending
Application number
CN201910184984.6A
Other languages
Chinese (zh)
Inventor
李金龙
赵德贤
孙铠
Current Assignee
Beijing Competitive Times Technology Co ltd
Original Assignee
Beijing Competitive Times Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Competitive Times Technology Co ltd
Priority to CN201910184984.6A
Publication of CN111698522A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a live broadcast system based on mixed reality, relating to the technical field of live broadcasting. The system comprises a data acquisition module, a data compression module, a data transmission module, a cloud server, a data query module, a management module, a data processing module, a data storage module, and a plurality of mobile terminals in communication connection with the data processing module. The data compression module performs video compression and audio compression on the collected video data and audio data of the real character and uploads them to the cloud server, where they are decompressed to form real three-dimensional scene data and stored. The data processing module then extracts the real three-dimensional scene data stored in the cloud server together with a preset virtual three-dimensional scene, performs scene reconstruction, and generates panoramic video data for the mobile terminals to receive and watch. This solves the problem that presenting a real character in a virtual scene by means of a green screen reduces the user's sense of reality during live broadcast.

Description

Live system based on mixed reality
Technical Field
The invention relates to the technical field of live broadcasting, in particular to a live broadcasting system based on mixed reality.
Background
Mixed Reality (MR) is a further development of virtual reality technology that builds an interactive feedback information loop between the real world, the virtual world and the user by presenting virtual scene information in the real scene to enhance the realism of the user experience.
Nowadays, with the development of electronic information technology and the continuous progress of computer technology, people's daily entertainment activities have become increasingly rich. Among them, online live broadcasting is popular with users because of its rich content and interactivity.
Current live broadcast methods generally collect video data and audio data on site through a camera and a microphone respectively, and then transmit the audio data and video data directly to user terminals for playback. In practice, this approach has the following disadvantage: because a green screen is needed to present a real character in a virtual scene for live broadcast, a certain delay is introduced and the user's sense of reality is reduced. Designing a live broadcast system based on mixed reality is therefore an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a live broadcast system based on mixed reality, so as to solve the problem described in the background art that presenting a real character in a virtual scene through a green screen for live broadcast reduces the user's sense of reality.
In order to achieve the purpose, the invention provides the following technical scheme:
a mixed reality based live system comprising: the system comprises a data acquisition module, a data compression module, a data transmission module, a cloud server, a data query module, a management module, a data processing module, a data storage module and a plurality of mobile terminals which are in communication connection with the data processing module;
the data acquisition module is connected with the data processing module sequentially through the data compression module, the data transmission module and the cloud server, and the cloud server is also respectively connected with the data query module and the management module in a bidirectional mode;
the data acquisition module is used for acquiring video data and audio data of a real character in real time; the data compression module is used for performing video compression and audio compression, respectively, on the video data and audio data of the real character acquired by the data acquisition module and uploading them to the cloud server through the data transmission module, where they are decompressed to form real three-dimensional scene data and stored;
the data query module is used for querying and modifying the real three-dimensional scene data stored in the cloud server; the management module is used for monitoring the real three-dimensional scene data stored in the cloud server and automatically deleting real three-dimensional scene data that is illegal or does not comply with preset rules;
the data processing module is used for extracting the real three-dimensional scene data stored in the cloud server and a preset virtual three-dimensional scene to perform scene reconstruction and generate panoramic video data, for acquiring video data and audio data of the real character (user) in real time through the data acquisition module, and for updating the panoramic video data accordingly based on the newly acquired video data and audio data;
the mobile terminal is used for establishing communication connection with the data processing module to receive the panoramic video data for watching.
As a further scheme of the invention: the real three-dimensional scene data comprises real character video data and audio data.
As a still further scheme of the invention: the scene reconstruction is to input real character video data in real three-dimensional scene data, position the real character video data according to a preset virtual three-dimensional scene, perform picture superposition according to the position of a real character in the virtual three-dimensional scene to generate a corresponding panoramic image, and then splice audio data in the corresponding real three-dimensional scene data and the panoramic image into panoramic video data.
As a still further scheme of the invention: the data storage module is used for storing the panoramic video data generated by the data processing module, and the panoramic video data generated by the data processing module is stored by the data storage module, so that the data can be prevented from being lost due to power failure, and the safety is effectively improved.
As a still further scheme of the invention: the data acquisition module comprises a video acquisition module and an audio acquisition module, the video acquisition module is used for acquiring real figure video data in real time, and the audio acquisition module is used for acquiring real figure audio data in real time.
As a still further scheme of the invention: the data compression module comprises a video compression module and an audio compression module.
As a still further scheme of the invention: the live broadcast system based on mixed reality further comprises a power supply module for supplying power to the system, and the data processing module is further connected with the data storage module and the power supply module respectively.
As a still further scheme of the invention: the audio acquisition module comprises a microphone, and the audio acquisition module is used for encoding and decoding audio digital signals input by the microphone to generate audio data; the microphones are a plurality of and are uniformly distributed on the outer side of the video acquisition module to form a ring.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, the data compression module is used for respectively carrying out video compression and audio compression on the collected video data and audio data of the real character and uploading the video data and audio data to the cloud server for decompression to form real three-dimensional scene data and storing the real three-dimensional scene data, then the data processing module is used for extracting the real three-dimensional scene data stored in the cloud server and a preset virtual three-dimensional scene for scene reconstruction to generate panoramic video data for the mobile terminal to receive and watch, so that the problem that the reality experience of a user is reduced because the real character is presented in the virtual scene for live broadcast by using a green curtain is solved, a high-performance local server is not required to be configured, and the method has a wide market prospect.
Drawings
Fig. 1 is a block diagram of a mixed reality based live system.
Fig. 2 is a block diagram of a data acquisition module in a mixed reality-based live broadcast system.
Fig. 3 is a block diagram of a data compression module in a mixed reality-based live system.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Referring to fig. 1, in an embodiment of the present invention, a mixed reality-based live broadcasting system includes: the mobile terminal comprises a data acquisition module, a data compression module, a data transmission module, a cloud server, a data query module, a management module, a data processing module, a data storage module and a plurality of mobile terminals in communication connection with the data processing module.
Specifically, the data acquisition module is connected with the data processing module sequentially through the data compression module, the data transmission module and the cloud server, and the cloud server is further connected with the data query module and the management module in a bidirectional mode.
Further, in the embodiment of the present invention, the data acquisition module is configured to acquire real-world person video data and real-world person audio data in real time; the data compression module is used for respectively carrying out video compression and audio compression on the video data and the audio data of the real character acquired by the data acquisition module, uploading the video data and the audio data to the cloud server through the data transmission module, decompressing the video data and the audio data to form real three-dimensional scene data and storing the real three-dimensional scene data.
Specifically, the data compression module performs video compression on the real character video data collected by the data acquisition module to generate compressed video with several alternative playing bit rates, and the cloud server decompresses the compressed video to recover the original real character video data. The data compression module also performs audio compression on the real character audio data collected by the data acquisition module to generate compressed audio data, and the cloud server decompresses the compressed audio data to recover the original real character audio data.
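As an illustration only and not part of the patented design, the sketch below shows one way the multi-bit-rate video compression and the separate audio compression described above could be produced with the ffmpeg command-line tool; the source file name, the bit-rate ladder, and the output naming are assumptions.

```python
# Sketch of the compression stage: produce several alternative-bit-rate video
# renditions and one compressed audio track before upload to the cloud server.
# Assumes ffmpeg is installed; file names and bit rates are illustrative only.
import subprocess

def compress_video_renditions(src="capture.mp4", bitrates=("800k", "2000k", "4000k")):
    """Encode one H.264 video-only rendition per target playing bit rate."""
    outputs = []
    for br in bitrates:
        out = f"video_{br}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-c:v", "libx264", "-b:v", br,  # video compression at this bit rate
             "-an", out],                    # strip audio: video-only rendition
            check=True)
        outputs.append(out)
    return outputs

def compress_audio(src="capture.mp4", out="audio.aac", bitrate="128k"):
    """Encode the audio track separately with AAC."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vn", "-c:a", "aac", "-b:a", bitrate, out],
        check=True)
    return out
```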
Further, in the embodiment of the present invention, the real three-dimensional scene data includes real character video data and audio data; the data query module is used for querying and modifying the real three-dimensional scene data stored in the cloud server; the management module is used for monitoring the real three-dimensional scene data stored in the cloud server and automatically deleting real three-dimensional scene data that is illegal or does not comply with preset rules.
Further, in the embodiment of the present invention, the data processing module is configured to extract real three-dimensional scene data stored in the cloud server and a preset virtual three-dimensional scene for scene reconstruction to generate panoramic video data, acquire video data and audio data of a real person (user) in real time through the data acquisition module, and update the panoramic video data according to the acquired video data and audio data.
Further, in the embodiment of the present invention, in the scene reconstruction, the real character video data in the real three-dimensional scene data is taken as input and positioned according to the preset virtual three-dimensional scene, frames are superimposed according to the position of the real character in the virtual three-dimensional scene to generate a corresponding panoramic image, and the audio data in the corresponding real three-dimensional scene data is then combined with the panoramic image into panoramic video data.
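Purely as an illustration of the superposition step described above, the following sketch pastes a real-character frame into a virtual-scene panorama at a preset position; the frame sizes, anchor position, and alpha mask are assumptions, and muxing the audio track would follow as a separate step.

```python
# Sketch of picture superposition: place a real-character patch onto the
# virtual three-dimensional scene's panorama at a preset position.
# Sizes, the anchor position, and the alpha mask are illustrative assumptions.
import numpy as np

def superimpose_character(panorama, character, mask, top_left):
    """Alpha-blend the character patch onto the panorama at (row, col) top_left."""
    y, x = top_left
    h, w = character.shape[:2]
    region = panorama[y:y + h, x:x + w].astype(np.float32)
    alpha = mask[..., None].astype(np.float32)          # 1.0 where the character is
    blended = alpha * character.astype(np.float32) + (1.0 - alpha) * region
    out = panorama.copy()
    out[y:y + h, x:x + w] = blended.astype(panorama.dtype)
    return out

# Example: a 4096x2048 equirectangular virtual scene and a 720x480 character patch.
virtual_scene = np.zeros((2048, 4096, 3), dtype=np.uint8)
character_frame = np.full((480, 720, 3), 200, dtype=np.uint8)
character_mask = np.ones((480, 720), dtype=np.float32)
panoramic_image = superimpose_character(virtual_scene, character_frame,
                                        character_mask, top_left=(900, 1600))
```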
Furthermore, in the embodiment of the present invention, the data storage module is configured to store the panoramic video data generated by the data processing module; storing this data locally prevents data loss in the event of a power failure and effectively improves security. The data storage module may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Further, in the embodiment of the present invention, the mobile terminal is configured to establish a communication connection with the data processing module to receive the panoramic video data for viewing. The mobile terminal may be one or a combination of a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a vehicle-mounted computer. The mobile terminal first downloads App software for watching the panoramic video data generated by the data processing module; the App software establishes the communication connection between the mobile terminal and the data processing module, after which the mobile terminal watches the panoramic video data according to the configured live broadcast parameters. The communication connection may be established through a mobile network, wireless WiFi, or wireless Bluetooth.
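As a hedged sketch of the viewing side only, the code below has a mobile client request the panoramic video stream from the data processing module over HTTP; the address, path, and live-broadcast parameters are hypothetical placeholders, not an interface defined by the patent.

```python
# Sketch of the mobile-terminal side: connect to the data processing module
# and receive the panoramic video data as a byte stream. The URL and the
# query parameters ("live broadcast parameters") are hypothetical.
import requests

PROCESSING_MODULE_URL = "http://192.168.1.10:8080/panorama/live"  # assumed address

def watch_live(resolution="3840x1920", bitrate="4000k", out_path="panorama_live.mp4"):
    params = {"resolution": resolution, "bitrate": bitrate}
    with requests.get(PROCESSING_MODULE_URL, params=params, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=64 * 1024):  # receive in chunks
                if chunk:
                    f.write(chunk)
    # in a real App the chunks would feed a video decoder rather than a file
```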
Further, in the embodiment of the present invention, the cloud server includes a memory, and the real three-dimensional scene data formed by decompressing the uploaded data is stored in this memory. The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic disk or optical disk.
Specifically, the memory is accessed through a Web service application program interface (API) or a Web user interface; that is, the data is stored on a number of virtual servers, generally managed by a third party, rather than on dedicated servers. Storing the data in the cloud server's memory effectively reduces the local storage space occupied and improves work efficiency.
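To make "accessed through a Web service API" concrete, the sketch below stores, queries, and deletes real three-dimensional scene data over a REST-style interface; the base URL and endpoint paths are invented for illustration and would be defined by the actual cloud deployment.

```python
# Sketch: store, query, and delete real three-dimensional scene data through
# a Web-service API on the cloud server. All endpoints shown are assumptions.
import requests

CLOUD_API = "https://cloud.example.com/api/v1"  # assumed base URL

def upload_scene(scene_id, payload: bytes):
    requests.put(f"{CLOUD_API}/scenes/{scene_id}", data=payload, timeout=30).raise_for_status()

def query_scene(scene_id) -> bytes:
    resp = requests.get(f"{CLOUD_API}/scenes/{scene_id}", timeout=30)
    resp.raise_for_status()
    return resp.content

def delete_scene(scene_id):
    requests.delete(f"{CLOUD_API}/scenes/{scene_id}", timeout=30).raise_for_status()
```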
Referring to fig. 2, in another embodiment provided by the present invention, the data acquisition module includes a video acquisition module and an audio acquisition module, the video acquisition module is used for acquiring real-world character video data in real time, and the audio acquisition module is used for acquiring real-world character audio data in real time.
Referring to fig. 3, in another embodiment of the present invention, the data compression module includes a video compression module and an audio compression module; the video compression module is used for performing video compression on the real character video data acquired by the data acquisition module to generate compressed video with several alternative playing bit rates, which the cloud server decompresses to recover the original real character video data; the audio compression module is used for performing audio compression on the real character audio data collected by the data acquisition module to generate compressed audio data, which the cloud server decompresses to recover the original real character audio data.
Referring to fig. 1-3, in another embodiment provided by the present invention, the mixed reality based live broadcast system further includes a power module for supplying power to the system; the power module may be an AC power supply, a DC power supply, a disposable battery, or a rechargeable battery. When the power module includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The data processing module is also connected with the data storage module and the power module, respectively.
Further, in the embodiment of the present invention, the audio acquisition module includes a microphone and encodes and decodes the audio digital signal input by the microphone to generate audio data; there are a plurality of microphones, uniformly distributed in a ring around the outside of the video acquisition module, so that sound can be picked up from all angles.
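The uniform ring arrangement of the microphones can be made concrete with a short geometric sketch; the microphone count and ring radius below are assumptions, not values specified by the patent.

```python
# Sketch: (x, y) offsets for N microphones evenly spaced on a ring around the
# video acquisition module. Count and radius are illustrative assumptions.
import math

def microphone_ring(n_mics=8, radius_m=0.15):
    positions = []
    for k in range(n_mics):
        theta = 2.0 * math.pi * k / n_mics       # equal angular spacing
        positions.append((radius_m * math.cos(theta), radius_m * math.sin(theta)))
    return positions

for i, (x, y) in enumerate(microphone_ring()):
    print(f"mic {i}: x={x:+.3f} m, y={y:+.3f} m")
```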
It will be understood by those skilled in the art that the modules shown in fig. 1-3 are only block diagrams of the portions relevant to the present disclosure and do not constitute a limitation of the mixed reality based live system described herein; a particular mixed reality based live system may include more or fewer components than those shown in the figures, combine certain components, or have a different arrangement of components.
In another embodiment provided by the present invention, the video capture module may be a 360 degree VR panoramic camera or a binocular camera.
For example, when a 360 degree VR panoramic camera is adopted, it can replace multiple pan-tilt cameras; compared with an ordinary wide-angle surveillance camera, the VR panoramic camera has a wider monitoring range and therefore effectively reduces the space occupied.
For example, when a binocular camera is used, two viewpoint images of the same scene, a left viewpoint image and a right viewpoint image, can be captured by the binocular camera. A disparity map is obtained using a stereo matching algorithm, and a depth map is further derived from it; the depth map records the distance between objects in the scene and the binocular camera and can further be used for measurement, three-dimensional reconstruction, synthesis of virtual viewpoints, and the like.
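The disparity-then-depth step can be sketched with OpenCV's semi-global block matching as below; the focal length, baseline, and matcher parameters are assumptions and would come from the actual camera calibration.

```python
# Sketch: disparity map and depth map from a binocular (left/right) image pair
# using OpenCV semi-global block matching. Calibration values are assumed.
import cv2
import numpy as np

FOCAL_PX = 700.0     # assumed focal length of the rectified cameras, in pixels
BASELINE_M = 0.12    # assumed distance between the two cameras, in metres

def depth_from_stereo(left_gray, right_gray):
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,  # must be divisible by 16
                                    blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1                      # guard against division by zero
    return FOCAL_PX * BASELINE_M / disparity             # depth = f * B / d, in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    depth_map = depth_from_stereo(left, right)
```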
For example, in live game broadcasting, the game scene can be matched with the real scene using the depth information from the binocular camera, and a new composite picture is then produced and broadcast live. In this way, the real character and the virtual reality scene can be placed and matched in one scene for live broadcast without using a green screen: the real character video data in the real three-dimensional scene data is taken as input and positioned according to the preset virtual three-dimensional scene, pictures are superimposed according to the position of the real character in the virtual three-dimensional scene to generate a corresponding panoramic image, and the audio data in the corresponding real three-dimensional scene data is then combined with the panoramic image into panoramic video data. The parts of the system not described here are the same as in the prior art or can be implemented using the prior art.
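One way to realise "without using a green screen" is a depth-threshold matte: pixels whose depth is smaller than a chosen distance are treated as the real character and composited over the game scene. The sketch below illustrates this idea; the threshold value is an assumption, and it simplifies whatever matting the described system would actually use.

```python
# Sketch: extract the real character with a depth threshold (no green screen)
# and composite the result onto a game scene frame of the same size.
# The 2-metre threshold is an illustrative assumption.
import numpy as np

def depth_matte(depth_m, max_person_depth_m=2.0):
    """0/1 mask marking pixels closer to the camera than the threshold."""
    return (depth_m < max_person_depth_m).astype(np.float32)

def composite_with_game_scene(camera_frame, mask, game_scene):
    alpha = mask[..., None]                               # broadcast over colour channels
    blended = alpha * camera_frame.astype(np.float32) + (1.0 - alpha) * game_scene.astype(np.float32)
    return blended.astype(camera_frame.dtype)
```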
In another embodiment provided by the invention, a mixed reality-based live broadcasting method comprises the following steps (an illustrative sketch follows the list of steps):
real-time acquisition of video data and audio data of real persons;
respectively performing video compression and audio compression on the collected video data and audio data of the real character, uploading the video data and the audio data to a cloud server, decompressing the video data and the audio data to form real three-dimensional scene data, and storing the real three-dimensional scene data, wherein the real three-dimensional scene data comprises the video data and the audio data of the real character;
monitoring real three-dimensional scene data stored in a cloud server, and automatically deleting the real three-dimensional scene data which are illegal and do not accord with preset regulations;
extracting real three-dimensional scene data stored in a cloud server and a preset virtual three-dimensional scene to perform scene reconstruction to generate panoramic video data;
and receiving the panoramic video data through the mobile terminal for watching.
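The sketch below strings the steps above into a single loop. Every callable passed in is a placeholder for a module already described (acquisition, compression, cloud storage, content monitoring, scene reconstruction, distribution); the names are hypothetical and the loop is an outline of the method, not the claimed implementation.

```python
# Sketch: the mixed-reality live broadcasting method as one loop, with each
# module injected as a placeholder callable or object. Names are hypothetical.

def run_live_broadcast(capture, compress, cloud, moderate, reconstruct, publish,
                       virtual_scene):
    while True:
        video_frame, audio_chunk = capture()                 # 1. real-time acquisition
        scene_id = cloud.store(compress(video_frame),        # 2. compress, upload,
                               compress(audio_chunk))        #    decompress and store
        if not moderate(scene_id):                           # 3. monitor; delete if non-compliant
            cloud.delete(scene_id)
            continue
        panorama = reconstruct(cloud.load(scene_id),         # 4. scene reconstruction with the
                               virtual_scene)                #    preset virtual 3D scene
        publish(panorama)                                    # 5. mobile terminals receive and watch
```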
Unless explicitly stated otherwise, the steps need not be performed in the exact order shown and described and may be performed in other orders. Moreover, at least some of the steps need not be performed at the same time; they may be performed at different moments, and they need not be executed strictly in sequence but may be performed in turn or alternately with other steps.
The invention has the beneficial effects that: the data compression module performs video compression and audio compression on the collected video data and audio data of the real character and uploads them to the cloud server, where they are decompressed to form real three-dimensional scene data and stored; the data processing module then extracts the real three-dimensional scene data stored in the cloud server together with a preset virtual three-dimensional scene, performs scene reconstruction, and generates panoramic video data for the mobile terminals to receive and watch. This solves the problem that presenting a real character in a virtual scene by means of a green screen reduces the user's sense of reality during live broadcast; no high-performance local server needs to be configured, and the system has broad market prospects.
In the several embodiments provided by the present invention, it should be understood that the described embodiments are merely illustrative. For example, the division into modules is only a logical functional division; in actual implementation there may be other ways of division, for example several modules may be combined or integrated together, or some modules may be omitted, and some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, the communication connections described may be established through certain interfaces, devices, or units.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus the necessary general-purpose hardware. Those skilled in the art will also understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program stored in a computer-readable storage medium; when executed, the program may include the processes of the method embodiments. The storage medium may be a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or the like.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention. The described embodiments are neither required nor exhaustive of all possible embodiments, and obvious variations or modifications of the invention may be made without departing from the scope of the invention.

Claims (7)

1. A mixed reality based live system, comprising: a data acquisition module, a data compression module, a data transmission module, a cloud server, a data query module, a management module, a data processing module, a data storage module, and a plurality of mobile terminals in communication connection with the data processing module;
the data acquisition module is connected with the data processing module sequentially through the data compression module, the data transmission module and the cloud server, and the cloud server is also respectively connected with the data query module and the management module in a bidirectional mode;
the data acquisition module is used for acquiring video data and audio data of a real character in real time; the data compression module is used for performing video compression and audio compression, respectively, on the video data and audio data of the real character acquired by the data acquisition module and uploading them to the cloud server through the data transmission module, where they are decompressed to form real three-dimensional scene data and stored;
the data query module is used for querying and modifying the real three-dimensional scene data stored in the cloud server; the management module is used for monitoring the real three-dimensional scene data stored by the cloud server;
the data processing module is used for extracting real three-dimensional scene data stored in the cloud server and a preset virtual three-dimensional scene to perform scene reconstruction to generate panoramic video data;
the mobile terminal is used for establishing communication connection with the data processing module to receive the panoramic video data for watching.
2. The mixed reality based live system of claim 1, wherein the real three-dimensional scene data comprises real character video data and audio data.
3. The mixed reality-based live broadcast system according to claim 2, wherein in the scene reconstruction, the real character video data in the input real three-dimensional scene data is positioned according to a preset virtual three-dimensional scene, pictures are superimposed according to the position of the real character in the virtual three-dimensional scene to generate a corresponding panoramic image, and the audio data in the corresponding real three-dimensional scene data is then combined with the panoramic image into panoramic video data.
4. The mixed reality-based live broadcasting system of claim 2, wherein the data storage module is configured to store the panoramic video data generated by the data processing module.
5. The mixed reality based live broadcast system of any one of claims 1-4, wherein the data acquisition module comprises a video acquisition module and an audio acquisition module, the video acquisition module is used for acquiring real-person video data in real time, and the audio acquisition module is used for acquiring real-person audio data in real time.
6. The mixed reality based live system of any one of claims 1-4, wherein the data compression module comprises a video compression module and an audio compression module.
7. The mixed reality based live broadcast system according to claim 6, further comprising a power module for supplying power to the system, wherein the data processing module is further connected to the data storage module and the power module, respectively.
CN201910184984.6A 2019-03-12 2019-03-12 Live system based on mixed reality Pending CN111698522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910184984.6A CN111698522A (en) 2019-03-12 2019-03-12 Live system based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910184984.6A CN111698522A (en) 2019-03-12 2019-03-12 Live system based on mixed reality

Publications (1)

Publication Number Publication Date
CN111698522A (en) 2020-09-22

Family

ID=72475480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910184984.6A Pending CN111698522A (en) 2019-03-12 2019-03-12 Live system based on mixed reality

Country Status (1)

Country Link
CN (1) CN111698522A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539925A (en) * 2014-12-15 2015-04-22 北京邮电大学 3D scene reality augmentation method and system based on depth information
CN105654471A (en) * 2015-12-24 2016-06-08 武汉鸿瑞达信息技术有限公司 Augmented reality AR system applied to internet video live broadcast and method thereof
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN207150751U (en) * 2017-07-23 2018-03-27 供求世界科技有限公司 A kind of AR systems for network direct broadcasting
CN207460313U (en) * 2017-12-04 2018-06-05 上海幻替信息科技有限公司 Mixed reality studio system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235591A (en) * 2020-10-15 2021-01-15 深圳市歌华智能科技有限公司 Virtual reality live broadcast distribution platform
CN113038262A (en) * 2021-01-08 2021-06-25 深圳市智胜科技信息有限公司 Panoramic live broadcast method and device
CN115442658A (en) * 2022-08-04 2022-12-06 珠海普罗米修斯视觉技术有限公司 Live broadcast method and device, storage medium, electronic equipment and product
CN115442658B (en) * 2022-08-04 2024-02-09 珠海普罗米修斯视觉技术有限公司 Live broadcast method, live broadcast device, storage medium, electronic equipment and product

Similar Documents

Publication Publication Date Title
CN104012106B (en) It is directed at the video of expression different points of view
CN113038287B (en) Method and device for realizing multi-user video live broadcast service and computer equipment
CN106534757B (en) Face exchange method and device, anchor terminal and audience terminal
WO2016150317A1 (en) Method, apparatus and system for synthesizing live video
US20130101162A1 (en) Multimedia System with Processing of Multimedia Data Streams
CN111698522A (en) Live system based on mixed reality
CN107205122A (en) The live camera system of multiresolution panoramic video and method
CN107302711B (en) Processing system of media resource
WO2018094866A1 (en) Unmanned aerial vehicle-based method for live broadcast of panorama, and terminal
CN112714327B (en) Interaction method, device and equipment based on live application program and storage medium
CN110809173B (en) Virtual live broadcast method and system based on AR augmented reality of smart phone
CN103369289A (en) Communication method of video simulation image and device
CN109242940B (en) Method and device for generating three-dimensional dynamic image
CN107995482B (en) Video file processing method and device
CN104065951A (en) Video shooting method, video playing method and intelligent glasses
US20170225077A1 (en) Special video generation system for game play situation
CN112492231B (en) Remote interaction method, device, electronic equipment and computer readable storage medium
CN105472374A (en) 3D live video realization method, apparatus, and system
CN113794844B (en) Free view video acquisition system, method, device, server and medium
CN108093300A (en) Motion capture manages system
CN106060609B (en) Obtain the method and device of picture
CN112437332B (en) Playing method and device of target multimedia information
CN114827647A (en) Live broadcast data generation method, device, equipment, medium and program product
CN109479147B (en) Method and technical device for inter-temporal view prediction
CN115086730B (en) Subscription video generation method, subscription video generation system, computer equipment and subscription video generation medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200922