CN111147930A - Data output method and system based on virtual reality - Google Patents

Data output method and system based on virtual reality

Info

Publication number
CN111147930A
Authority
CN
China
Prior art keywords
data stream
stream
output
audio data
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911397336.5A
Other languages
Chinese (zh)
Inventor
周清会
曹延杰
汤代理
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Manheng Digital Technology Co., Ltd.
Original Assignee
Shanghai Manheng Digital Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Manheng Digital Technology Co., Ltd.
Priority to CN201911397336.5A
Publication of CN111147930A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests

Abstract

The application relates to a virtual reality-based data output method and system. In the method, a first server acquires current image data information of a predetermined device and forms a video data stream that is output through a first push communication service, while a second server acquires current audio data information of the predetermined device and forms an audio data stream that is output through a second push communication service. A first pull stream service then reads and decodes the video data stream to form the video data output, and a second pull stream service reads and decodes the audio data stream to form the audio data output.

Description

Data output method and system based on virtual reality
Technical Field
The application relates to the technical field of virtual reality, in particular to a data output method and system based on virtual reality.
Background
Virtual reality (VR) technology integrates computer three-dimensional model processing, stereoscopic display, natural human-machine interaction, electronic information, simulation, and related technologies; a computer simulates a highly realistic virtual environment that gives people a sense of immersion. Immersive VR provides a fully immersive experience, giving the user the visual sensation of being inside the virtual world. As the technology iterates, VR live broadcasting has come into widespread use: with a specific panoramic shooting device and a software system, conventional two-dimensional live broadcasting can be upgraded to a VR panoramic live broadcast mode. The viewing angle is no longer confined to a fixed screen frame but changes as the viewer's perspective changes, bringing a brand-new visual experience and a new presentation form for video content. In the prior art, however, an audio file and a video file must be output simultaneously during VR live broadcasting. In actual transmission the two together occupy considerable bandwidth, so severe stuttering easily occurs when bandwidth is limited. Moreover, existing live broadcasting relies on protocols such as RTMP and HLS; RTMP delay is usually about 3 seconds and HLS transmission delay about 10 seconds. Live broadcasting with these protocols is therefore prone to latency, which reduces the smoothness of the broadcast and greatly degrades the user experience.
Disclosure of Invention
According to a first aspect of some embodiments of the present application, there is provided a virtual reality-based data output method, comprising,
the method comprises the steps that a first server obtains current image data information of a preset device and forms a video data stream to be output through a first push communication service, and a second server obtains current audio data information of the preset device and forms an audio data stream to be output through a second push communication service;
and reading the video data stream and decoding the video data stream to form the video data output through a first pull stream service, and reading the audio data stream and decoding the audio data stream to form the audio data output through a second pull stream service.
Preferably, the virtual reality-based data output method further includes a first predetermined data stream tag and a second predetermined data stream tag, wherein the first server reading the video data stream and decoding it for output through the first pull stream service, and the second server reading the audio data stream and decoding it for output through the second pull stream service, specifically include:
the first pull stream service reads the video data stream through the first predetermined data stream tag and decodes it to form the video data output;
and the second pull stream service reads the audio data stream through the second predetermined data stream tag and decodes it to form the audio data output.
Preferably, the data output method based on virtual reality further includes:
and forming a data mark output after the audio data and the video data are output.
Preferably, the data output method based on virtual reality further includes:
and receiving the data mark read by the external equipment to form a connection request, and realizing data interaction between the external equipment and the feature server under the condition that the connection request is verified.
In another aspect, the present invention further provides a virtual reality-based data output system, comprising a first server and a second server, wherein:
the first server acquires current image data information of a predetermined device through a first push communication service and forms a video data stream, or reads the video data stream through a first pull stream service and decodes it to form video data that is output to the feature server;
the second server acquires current audio data information of the predetermined device through a second push communication service and forms an audio data stream, or reads the audio data stream through a second pull stream service and decodes it to form audio data that is output to the feature server.
Preferably, the virtual reality-based data output system further comprises a first predetermined data stream tag and a second predetermined data stream tag; specifically,
the first pull stream service reads the video data stream through the first predetermined data stream tag and decodes it to form the video data output;
and the second pull stream service reads the audio data stream through the second predetermined data stream tag and decodes it to form the audio data output.
Preferably, the virtual reality-based data output system further includes:
and the receiving unit is used for receiving the data mark read by the external equipment to form a connection request and realizing data interaction between the external equipment and the feature server under the condition that the connection request is verified.
In another aspect, an embodiment of the present invention further provides a server, which includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform a virtual reality-based data output method as described in any of the above.
In yet another aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are configured to perform a virtual reality-based data output method as described in any one of the above.
In the invention, bidirectional data transmission is realized through the WebSocket protocol, which greatly reduces live broadcast delay; the existing live delay can be cut to about 0.5 seconds. In addition, processing the audio file and the video file independently greatly reduces the computation load on the server, relieving the stuttering of audio and video transmission. Furthermore, during live broadcasting the audio and video data in the head-mounted display are acquired and output to an external device, so a user who is not wearing the headset can follow the experiencer's current spatial environment in real time, which adds to the interest of the broadcast. The method thus overcomes the prior-art limitation of live broadcasting only two-dimensional pictures.
Drawings
For a better understanding and appreciation of some embodiments of the present application, reference will now be made to the description of embodiments taken in conjunction with the accompanying drawings, in which like reference numerals designate corresponding parts in the figures.
FIG. 1 is an exemplary schematic diagram of a network environment system provided in accordance with some embodiments of the present application;
Fig. 2 illustrates a virtual reality-based data output method according to an embodiment of the present application.
Detailed Description
The following description, with reference to the accompanying drawings, is provided to facilitate a comprehensive understanding of various embodiments of the application as defined by the claims and their equivalents. These embodiments include various specific details for ease of understanding, but these are to be considered exemplary only. Accordingly, those skilled in the art will appreciate that various changes and modifications may be made to the various embodiments described herein without departing from the scope and spirit of the present application. In addition, descriptions of well-known functions and constructions will be omitted herein for brevity and clarity.
The terms and phrases used in the following specification and claims are not to be limited to the literal meaning, but are merely for the clear and consistent understanding of the application. Accordingly, it will be appreciated by those skilled in the art that the description of the various embodiments of the present application is provided for illustration only and not for the purpose of limiting the application as defined by the appended claims and their equivalents.
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in some embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be understood that the terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only, and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The expressions "first", "second", "the first", and "the second" are used for modifying the corresponding elements without regard to order or importance, and are used only for distinguishing one element from another without limiting the corresponding elements.
Example one
A terminal according to some embodiments of the present application may be an electronic device, which may include one or a combination of several of a virtual reality (VR) device, a renderer, a personal computer (PC, e.g., tablet, desktop, notebook, netbook, PDA), a smart phone, a mobile phone, an e-book reader, a Portable Multimedia Player (PMP), an audio/video player (MP3/MP4), a camera, a wearable device, and the like. According to some embodiments of the present application, the wearable device may include an accessory type (e.g., watch, ring, bracelet, glasses, or Head Mounted Device (HMD)), an integrated type (e.g., electronic garment), a decorative type (e.g., skin pad, tattoo, or built-in electronic device), and the like, or a combination of several. In some embodiments of the present application, the electronic device may be flexible, is not limited to the above devices, and may be a combination of one or more of the above devices. In this application, the term "user" may indicate a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device).
The embodiment of the application provides a virtual reality-based data output method. In order to facilitate understanding of the embodiments of the present application, they will be described in detail below with reference to the accompanying drawings.
Fig. 1 is an exemplary schematic diagram of a network environment system 100 provided in accordance with some embodiments of the present application. As shown in fig. 1, the network environment system 100 may include an electronic device 110, a network 120, a server 130, and the like. The electronic device 110 may include a bus 111, a processor 112, a memory 113, an input/output module 114, a listener 115, a communication module 116, a client 117, and the like. In some embodiments of the present application, electronic device 110 may omit one or more elements, or may further include one or more other elements.
The bus 111 may include circuitry. The circuitry may interconnect one or more elements within electronic device 110 (e.g., bus 111, processor 112, memory 113, input/output module 114, listener 115, communication module 116, and client 117). The circuitry may also enable communication (e.g., obtain and/or transmit information) between one or more elements within electronic device 110.
The processor 112 may include one or more co-processors, application processors (APs), and communication processors (CPs). As an example, the processor 112 may perform control and/or data processing operations (e.g., initiating case content) involving one or more elements of the electronic device 110.
The memory 113 may store data. The data may include instructions or data related to one or more other elements in the electronic device 110; for example, raw data prior to processing by the processor 112, intermediate data, and/or processed data. The memory 113 may include non-persistent memory and/or persistent memory. As an example, the memory 113 may store immersive environment parameters and the like. The physical attribute information of the immersive environment parameters may include, but is not limited to, one or a combination of rendering machine information, screen information, tracking information, and the like.
According to some embodiments of the present application, the memory 113 may store software and/or programs. The programs may include kernels, middleware, application programming interfaces (APIs), and/or application programs (or "applications"). By way of example, the memory 113 may store applications for the client 117, the listener 115, and the like.
The input/output module 114 may transmit instructions or data input from a user or an external device to other elements of the electronic device 110. Input/output module 114 may also output instructions or data obtained from other elements of electronic device 110 to a user or an external device. In some embodiments, the input/output module 114 may include an input unit through which a user may input information or instructions. As an example, the user may select a profile or case content at the client 117 main interface.
The listener 115 may receive commands and feed back information. In some embodiments, the listener 115 may receive a start command from the client 117 and feed information back to the client 117. As an example, the listener 115 may check for the presence or absence of the configuration file and the case content, and feed the result back to the client 117. As another example, the listener 115 may verify that the text content of the client 117 configuration file is consistent with the local configuration file.
The communication module 116 may configure communication between devices. In some embodiments, the network environment system 100 may further include one or more other electronic devices 140. By way of example, the communication between devices may include communication between the electronic device 110 and other devices (e.g., the server 130 or the electronic device 140). For example, the communication module 116 may be connected to the network 120 through wireless or wired communication to enable communication with other devices (e.g., the server 130 or the electronic device 140). As an example, the client 117 may send a start command to the listener 115 through the User Datagram Protocol (UDP). As another example, the listener 115 may feed information back to the client 117 via UDP, as sketched below.
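As a rough illustration of this UDP exchange, the following Python sketch sends a start command from the client side and shows, in comments, the listener's receive-and-acknowledge side. This is a minimal sketch rather than code from the application; the address, port, and message format are illustrative assumptions.

```python
import socket

LISTENER_ADDR = ("192.168.1.20", 9000)  # hypothetical listener IP and port

# Client 117 side: send a start command over UDP and wait for feedback.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"start", LISTENER_ADDR)
client.settimeout(2.0)
try:
    reply, _ = client.recvfrom(1024)    # e.g. b"ack" fed back by the listener
except socket.timeout:
    reply = None                        # no listener reachable in this sketch

# Listener 115 side (run on the listening device):
#   listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   listener.bind(("0.0.0.0", 9000))
#   data, addr = listener.recvfrom(1024)  # receive the start command
#   listener.sendto(b"ack", addr)         # feed information back to the client
```

Because UDP is connectionless, the feedback message is the only confirmation the client receives, which matches the lightweight command/feedback pattern described above.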
The client 117 may be used for user interaction. In some embodiments, the user may select the configuration files and/or case content at the client 117 main interface. In some embodiments, the client 117 may send the profile path and the case content path to the listeners 115 of the N devices according to the N IP addresses of the profile. As an example, the client 117 may package and send the text content of the configuration file to the listener 115 according to the feedback information of the presence or absence of the configuration file. For another example, the client 117 may send the start command to the listener 115 via a user datagram protocol.
Network 120 may include a communication network. The communication Network may comprise a computer Network (e.g., a Local Area Network (LAN) or Wide Area Network (WAN)), the internet and/or a telephone Network, etc., or a combination of several. Network 120 may send information to other devices in network environment system 100 (e.g., electronic device 110, server 130, electronic device 140, etc.).
Server 130 may be connected to other devices (e.g., electronic device 110, electronic device 140, etc.) in network environment system 100 via network 120. In some embodiments, the client 117 starts a local distribution server, compresses the case content to a specified path of the server, and sends a download command carrying the specified path to the listener 115.
Electronic device 140 may be of the same or a different type than electronic device 110. According to some embodiments of the present application, some or all of the operations performed in the electronic device 110 may be performed in another device or devices (e.g., the electronic device 140 and/or the server 130). In some embodiments, when electronic device 110 performs one or more functions and/or services automatically or in response to a request, electronic device 110 may request other devices (e.g., electronic device 140 and/or server 130) to perform the functions and/or services instead. In some embodiments, electronic device 110 additionally performs one or more functions associated with the requested function or service. In some embodiments, other devices (e.g., electronic device 140 and/or server 130) may perform the requested function or other related functions and may transmit the results to electronic device 110, which may return them as-is or process them further to provide the requested function or service. By way of example, the electronic device 110 may use cloud computing, distributed technology, and/or client-server computing, or a combination of several. In some embodiments, cloud computing may include public clouds, private clouds, hybrid clouds, and the like, depending on the nature of the cloud computing service. In some embodiments, while the electronic device 110 may be a master device, one or more other electronic devices 140 may be controlled devices. In some embodiments, the electronic device 110 and the other electronic devices 140 may establish a connection.
it should be noted that the above description of the network environment system 100 is merely for convenience of description and is not intended to limit the scope of the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the principles of the system, which may be combined in any manner or combined with other elements to form a subsystem for use in a field of application in which the method and system described above is practiced. For example, the network environment system 100 may further include a database or the like. Such variations are within the scope of the present application.
Based on the network environment described above, the environment further comprises a live broadcast server. When the live broadcast server is activated or started, it starts a bidirectional data transmission service, through which the following virtual reality-based data output method is realized.
as shown in fig. 2, in step S110, a first server obtains current image data information of a predetermined device and forms a video data stream to be output through a first push communication service, and a second server obtains current audio data information of the predetermined device and forms an audio data stream to be output through a second push communication service; and under the condition that the handshake between the first server and the second server and the live broadcast server is successful, the bidirectional data transmission between the first server and/or the second server and the live broadcast server can be realized. The first push communication service can be realized by adopting a websocket protocol, and the second push communication service can be realized by adopting the websocket protocol.
The predetermined device can be a head-mounted display: the first server obtains the video data currently played in the headset and encodes it to form the video data stream, and the second server obtains the audio data currently played and encodes it to form the audio data stream.
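For concreteness, the push side can be sketched as a WebSocket producer. This is a minimal sketch under stated assumptions rather than the patented implementation: the third-party `websockets` package, the server URL, and the capture/encode helpers are all illustrative stand-ins.

```python
import asyncio
import websockets  # third-party package: pip install websockets

PUSH_URL = "ws://live-server.example:8000/push/video"  # hypothetical endpoint

def capture_frame() -> bytes:
    """Stand-in for grabbing the image currently shown in the headset."""
    return b"\x00" * 1024

def encode_frame(raw: bytes) -> bytes:
    """Stand-in for the encoding step that forms the video data stream."""
    return raw

async def push_video():
    # Connecting performs the WebSocket handshake with the live broadcast
    # server; once it succeeds, bidirectional transmission is available.
    async with websockets.connect(PUSH_URL) as ws:
        while True:
            await ws.send(encode_frame(capture_frame()))  # one stream chunk
            await asyncio.sleep(1 / 30)                   # ~30 fps pacing

if __name__ == "__main__":
    asyncio.run(push_video())
```

An analogous coroutine on the second server would push encoded audio chunks to its own endpoint, which is what keeps the two streams independent and the per-server computation low.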
In step S120, the video data stream is read through a first pull stream service and decoded to form the video data output, and the audio data stream is read through a second pull stream service and decoded to form the audio data output. The feature server can be a live broadcast server: it receives and displays the video data through the first pull stream service, and receives and plays the audio data through the second pull stream service.
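The pull side is the mirror image: each pull stream service consumes one WebSocket stream and decodes it. Again a hedged sketch; the URL and the decode/playback hooks are assumptions rather than the application's code.

```python
import asyncio
import websockets  # third-party package: pip install websockets

PULL_URL = "ws://live-server.example:8000/pull/video"  # hypothetical endpoint

def decode_frame(chunk: bytes) -> bytes:
    """Stand-in for the decoding step that forms the video data output."""
    return chunk

async def pull_video():
    async with websockets.connect(PULL_URL) as ws:
        async for chunk in ws:          # each message is one encoded chunk
            frame = decode_frame(chunk)
            # hand `frame` to the live broadcast server's display pipeline

if __name__ == "__main__":
    asyncio.run(pull_video())
```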
In the invention, bidirectional data transmission is realized through the WebSocket protocol, which greatly reduces live broadcast delay; the existing live delay can be cut to about 0.5 seconds. In addition, processing the audio file and the video file independently greatly reduces the computation load on the server, relieving the stuttering of audio and video transmission. Furthermore, during live broadcasting the audio and video data in the head-mounted display are acquired and output to an external device, so a user who is not wearing the headset can follow the experiencer's current spatial environment in real time, which adds to the interest of the broadcast. The method thus overcomes the prior-art limitation of live broadcasting only two-dimensional pictures.
As a further preferred embodiment, the virtual reality-based data output method further includes a first predetermined data stream tag and a second predetermined data stream tag. In step S120, the first server reading the video data stream and decoding it for output through the first pull stream service, and the second server reading the audio data stream and decoding it for output through the second pull stream service, specifically include:
step S1201, the first pull stream service reads the video data stream through the first preset data stream label, and performs decoding processing on the video data stream to form the video data output;
step S1202, the second pull stream service reads the audio data stream through the second predetermined data stream tag, and performs decoding processing on the audio data stream to form the audio data output.
As a further preferred embodiment, the virtual reality-based data output method further includes:
step S130, forming a data flag output after the audio data and the video data are output. Wherein, the data mark can be output in a two-dimensional code form.
As a further preferred embodiment, the present embodiment further includes: step S140, receiving a connection request formed when an external device reads the data mark, and realizing data interaction between the external device and the feature server once the connection request passes verification.
Anyone other than the experiencer can scan the QR-code data mark with a handheld mobile terminal; the live broadcast server thereby determines that the user requests a connection, and when the request passes verification it pushes data matching the request to the mobile terminal for output. A stereoscopic live broadcast mode of the virtual display is thus formed.
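On the server side, verifying a scanned connection request before pushing data might look like the following sketch. The session registry, message format, and close code are illustrative assumptions; note that recent versions of the `websockets` package pass the handler a single connection argument, while older versions also pass a path.

```python
import asyncio
import websockets  # third-party package: pip install websockets

VALID_SESSIONS = {"demo-session"}  # hypothetical registry of live sessions

async def handle_join(ws):
    request = await ws.recv()                  # e.g. "join:demo-session"
    if isinstance(request, bytes):
        request = request.decode()
    session = request.removeprefix("join:")    # Python 3.9+
    if session in VALID_SESSIONS:              # connection request verified
        await ws.send("ok")
        # start pushing the matched audio/video data to this terminal here
    else:
        await ws.close(code=4001, reason="verification failed")

async def main():
    async with websockets.serve(handle_join, "0.0.0.0", 8000):
        await asyncio.Future()                 # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```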
As a further preferred embodiment, the method further includes: a virtual camera is arranged at any predetermined angle in the virtual reality scene, the virtual camera being defined by a shooting angle that the user can choose. For example, if the user favors the picture at a certain angle, a virtual camera can be arranged at that angle; the virtual camera acquires the virtual picture at that angle and outputs it through the data output method described above, forming a multi-angle panoramic image presentation.
Example two
The present embodiment further provides a virtual reality-based data output system, which includes a first server and a second server, wherein:
the first server acquires current image data information of a predetermined device through a first push communication service and forms a video data stream, or reads the video data stream through a first pull stream service and decodes it to form video data that is output to the feature server;
the second server acquires current audio data information of the predetermined device through a second push communication service and forms an audio data stream, or reads the audio data stream through a second pull stream service and decodes it to form audio data that is output to the feature server.
As a further preferred embodiment, the virtual reality-based data output system further comprises a first predetermined data stream tag and a second predetermined data stream tag; specifically,
the first pull stream service reads the video data stream through the first predetermined data stream tag and decodes it to form the video data output;
and the second pull stream service reads the audio data stream through the second predetermined data stream tag and decodes it to form the audio data output.
As a further preferred embodiment, the virtual reality-based data output system further includes:
and the receiving unit is used for receiving the data mark read by the external equipment to form a connection request and realizing data interaction between the external equipment and the feature server under the condition that the connection request is verified.
The working principle of the virtual reality-based data output system provided by the embodiment of the invention is the same as that of the virtual reality-based data output method, and the details are not repeated here.
EXAMPLE III
The embodiment provides a server which can be used for data output of virtual reality. The server includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement a virtual reality-based data output method as proposed in the above embodiments, which specifically includes:
the method comprises the steps that a first server obtains current image data information of a preset device and forms a video data stream to be output through a first push communication service, and a second server obtains current audio data information of the preset device and forms an audio data stream to be output through a second push communication service;
and reading the video data stream and decoding the video data stream to form the video data output through a first pull stream service, and reading the audio data stream and decoding the audio data stream to form the audio data output through a second pull stream service.
The processor and memory may be connected by a bus or other means.
The memory, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the virtual reality-based data output method in the embodiments of the present invention. The processor executes the various functional applications and data processing of the terminal by running the software programs, instructions, and modules stored in the memory, thereby realizing the virtual reality-based data output method described above.
The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory may further include memory located remotely from the processor; such remote memories may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Example four
The present embodiment provides a storage medium on which a computer program is stored, which when executed by a processor implements a virtual reality-based data output method as set forth in the above embodiments.
The storage medium proposed by the present embodiment belongs to the same inventive concept as the virtual reality-based data output method proposed by the above embodiments, and technical details that are not described in detail in the present embodiment can be referred to the above embodiments, and the present embodiment has the same beneficial effects as the above embodiments.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a terminal, or a network device) to execute the methods according to the embodiments of the present invention.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in a virtual reality-based data output method provided by any embodiment of the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly also by hardware, though the former is the better embodiment in many cases. Based on such understanding, the technical solutions of the present invention, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the various embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A virtual reality-based data output method is characterized by comprising the following steps,
the method comprises the steps that a first server obtains current image data information of a preset device and forms a video data stream to be output through a first push communication service, and a second server obtains current audio data information of the preset device and forms an audio data stream to be output through a second push communication service;
and reading the video data stream and decoding the video data stream to form the video data output through a first pull stream service, and reading the audio data stream and decoding the audio data stream to form the audio data output through a second pull stream service.
2. The virtual reality-based data output method according to claim 1, further comprising a first predetermined data stream tag and a second predetermined data stream tag, wherein the first server reading the video data stream and decoding it for output through the first pull stream service, and the second server reading the audio data stream and decoding it for output through the second pull stream service, specifically comprise:
the first pull stream service reads the video data stream through the first predetermined data stream tag and decodes it to form the video data output;
and the second pull stream service reads the audio data stream through the second predetermined data stream tag and decodes it to form the audio data output.
3. The virtual reality-based data output method according to claim 1, further comprising:
and forming a data mark output after the audio data and the video data are output.
4. A virtual reality-based data output method according to claim 3, further comprising:
and receiving the data mark read by the external equipment to form a connection request, and realizing data interaction between the external equipment and the feature server under the condition that the connection request is verified.
5. A virtual reality-based data output system, comprising a first server and a second server, wherein:
the first server acquires current image data information of a predetermined device through a first push communication service and forms a video data stream, or reads the video data stream through a first pull stream service and decodes it to form video data that is output to the feature server;
the second server acquires current audio data information of the predetermined device through a second push communication service and forms an audio data stream, or reads the audio data stream through a second pull stream service and decodes it to form audio data that is output to the feature server.
6. The virtual reality-based data output system according to claim 5, further comprising a first predetermined data stream tag and a second predetermined data stream tag, wherein specifically:
the first pull stream service reads the video data stream through the first predetermined data stream tag and decodes it to form the video data output;
and the second pull stream service reads the audio data stream through the second predetermined data stream tag and decodes it to form the audio data output.
7. A virtual reality-based data output system according to claim 5, further comprising:
and the receiving unit is used for receiving the data mark read by the external equipment to form a connection request and realizing data interaction between the external equipment and the feature server under the condition that the connection request is verified.
8. An apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement a virtual reality-based data output method as recited in any one of claims 1-4.
9. A storage medium containing computer-executable instructions, characterized in that the computer-executable instructions, when executed by a computer processor, are configured to perform a virtual reality-based data output method as claimed in any one of claims 1 to 4.
CN201911397336.5A (priority date 2019-12-30; filing date 2019-12-30) Data output method and system based on virtual reality; status: Pending; publication: CN111147930A.

Priority Applications (1)

Application Number: CN201911397336.5A
Priority Date: 2019-12-30
Filing Date: 2019-12-30
Title: Data output method and system based on virtual reality

Applications Claiming Priority (1)

Application Number: CN201911397336.5A
Priority Date: 2019-12-30
Filing Date: 2019-12-30
Title: Data output method and system based on virtual reality

Publications (1)

Publication Number Publication Date
CN111147930A (published 2020-05-12)

Family

ID=70522021

Family Applications (1)

Application Number: CN201911397336.5A
Title: Data output method and system based on virtual reality
Priority Date: 2019-12-30
Filing Date: 2019-12-30
Status: Pending

Country Status (1)

Country Link
CN (1) CN111147930A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887703A (en) * 2021-03-26 2021-06-01 歌尔股份有限公司 Head-mounted display device control method, head-mounted display device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105791291A (en) * 2016-03-02 2016-07-20 腾讯科技(深圳)有限公司 Display control method for network application and real-time display update method and device
CN107172411A (en) * 2017-04-18 2017-09-15 浙江传媒学院 A kind of virtual reality business scenario rendering method under the service environment based on home videos
US20180314486A1 (en) * 2017-04-28 2018-11-01 Microsoft Technology Licensing, Llc Streaming of Augmented/Virtual Reality Spatial Audio/Video
US20190020905A1 (en) * 2017-07-14 2019-01-17 World Emergency Network - Nevada, Ltd. Mobile Phone as a Police Body Camera Over a Cellular Network
CN109644296A (en) * 2016-10-10 2019-04-16 华为技术有限公司 A kind of video stream transmission method, relevant device and system



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-05-12)