CN116567321A - Data processing method, device, equipment and medium - Google Patents

Data processing method, device, equipment and medium

Info

Publication number
CN116567321A
Authority
CN
China
Prior art keywords
data
intermediate code
decoding
media data
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310601530.0A
Other languages
Chinese (zh)
Inventor
党玉涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202310601530.0A priority Critical patent/CN116567321A/en
Publication of CN116567321A publication Critical patent/CN116567321A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H04N21/42607 Internal components of the client; Characteristics thereof for processing the incoming bitstream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/65 Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STBs; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/6437 Real-time Transport Protocol [RTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software
    • H04N21/8173 End-user applications, e.g. Web browser, game
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the present application provides a data processing method, apparatus, device, and medium. The method specifically comprises: acquiring target media data; decoding video data in the target media data using a hard decoding technique or a first network intermediate code, where the first network intermediate code is an intermediate code compiled from first preset-language code having the capability to decode preset video coding features, and can run in a browser environment; and decoding audio data in the target media data using a second network intermediate code, where the second network intermediate code is an intermediate code compiled from second preset-language code having the capability to decode preset audio coding features, and can likewise run in a browser environment. Embodiments of the present application can thereby support more audio and video codec types and coding features.

Description

Data processing method, device, equipment and medium
Technical Field
Embodiments of the present application relate to the field of communications technologies, and in particular, to a data processing method, apparatus, device, and medium.
Background
WebRTC (Web Real-Time Communications) is a real-time communication technology that allows web applications or sites to establish peer-to-peer connections between browsers without an intermediary, enabling the transmission of video data, audio streams, or any other data.
In application scenarios such as live streaming and cloud rendering, a client can use WebRTC to pull media data, such as video data and/or audio streams, from a server, and can use WebRTC to decode and render the media data so as to play it.
In practical applications, WebRTC has certain limitations. For example, coding features such as the AAC (Advanced Audio Coding) audio coding type and B-frames (bi-predictive frames) are widely used in live-streaming scenarios, but WebRTC supports neither the AAC audio coding type nor B-frames. As another example, the HEVC (High Efficiency Video Coding) format is commonly used in cloud rendering scenarios, whereas WebRTC does not support the HEVC format.
Disclosure of Invention
The embodiment of the application provides a data processing method which can support more audio and video coding and decoding types and coding characteristics.
Correspondingly, an embodiment of the present application also provides a data processing apparatus, an electronic device, and a storage medium for implementing and applying the above method.
To solve the above problems, an embodiment of the present application discloses a data processing method, including:
acquiring target media data;
decoding video data in the target media data using a hard decoding technique or a first network intermediate code; the hard decoding technique performs decoding with hardware, through a hardware interface; the first network intermediate code is an intermediate code compiled from first preset-language code having the capability to decode preset video coding features, and the first network intermediate code can run in a browser environment;
decoding audio data in the target media data using a second network intermediate code; the second network intermediate code is an intermediate code compiled from second preset-language code having the capability to decode preset audio coding features, and the second network intermediate code can run in a browser environment.
To solve the above problems, an embodiment of the present application discloses a data processing apparatus, including:
the acquisition module is used for acquiring target media data;
the video decoding module is used for decoding video data in the target media data using a hard decoding technique or a first network intermediate code; the hard decoding technique performs decoding with hardware, through a hardware interface; the first network intermediate code is an intermediate code compiled from first preset-language code having the capability to decode preset video coding features, and the first network intermediate code can run in a browser environment;
an audio decoding module, configured to decode audio data in the target media data using a second network intermediate code; the second network intermediate code is an intermediate code compiled from second preset-language code having the capability to decode preset audio coding features, and the second network intermediate code can run in a browser environment.
Optionally, the acquiring module includes:
the first acquisition module is used for acquiring media data from the server over a data channel; the data channel is a communication link based on the WebRTC (Web Real-Time Communications) technology, and the mode of the data channel is an unreliable transmission mode;
and the determining module is used for determining the target media data according to the media data acquired from the server.
Optionally, the first acquisition module includes:
the communication protocol determining module is used for determining a communication protocol corresponding to the received data;
and the second acquisition module is used for taking the received data corresponding to the first communication protocol as media data when the communication protocol is the first communication protocol.
Optionally, the first communication protocol is a UDP protocol.
Optionally, the apparatus further comprises:
the custom data acquisition module is used for taking the received data corresponding to the second communication protocol as custom data when the communication protocol is the second communication protocol.
Optionally, the audio decoding module includes:
an initialization module for initializing a plurality of audio decoding threads using the main thread;
the task distribution module is used for transmitting, using the main thread, an audio decoding task corresponding to the audio data in the target media data to an audio decoding thread in an idle state, so that the idle audio decoding thread decodes the task using the second network intermediate code; one instance of the second network intermediate code corresponds to one audio decoding task.
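The main-thread dispatch described by the initialization and task distribution modules above can be sketched as follows. This is a simplified, hypothetical simulation (not code from the patent): in a real browser, each decoding "thread" would be a Web Worker holding its own instance of the second network intermediate code (e.g. a WASM AAC decoder), and decoding would be asynchronous.

```javascript
// Simplified simulation of main-thread dispatch to idle audio decoding threads.
// Each slot stands in for a Web Worker owning one WASM decoder instance,
// mirroring "one intermediate-code instance corresponds to one decoding task".
class AudioDecodePool {
  constructor(size, decodeFn) {
    this.slots = Array.from({ length: size }, (_, id) => ({ id, busy: false }));
    this.decodeFn = decodeFn; // stands in for the WASM decode call
    this.pending = [];
  }

  submit(task) {
    const idle = this.slots.find((s) => !s.busy);
    if (!idle) { this.pending.push(task); return null; } // queue until a slot frees up
    idle.busy = true;
    const result = this.decodeFn(task); // synchronous here for illustration
    idle.busy = false;
    if (this.pending.length) this.submit(this.pending.shift());
    return result;
  }
}

const pool = new AudioDecodePool(4, (task) => `pcm:${task}`);
```

For example, `pool.submit('frame-0')` hands the task to the first idle slot and returns the decoded result.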
Optionally, the determining module includes:
the quality-of-service processing module is used for performing quality-of-service processing, using a third network intermediate code, on the media data acquired from the server so as to obtain the target media data; the third network intermediate code is an intermediate code compiled from third preset-language code having quality-of-service processing capability, and the third network intermediate code can run in a browser environment.
In order to solve the above problems, an embodiment of the present application discloses an electronic device, including: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform the method as in any of the above embodiments.
To address the above issues, embodiments of the present application disclose one or more machine readable media having executable code stored thereon that, when executed, cause a processor to perform the method of any of the above embodiments.
Embodiments of the present application include the following advantages:
An embodiment of the present application decodes the video data in the target media data using a hard decoding technique or a first network intermediate code, and decodes the audio data in the target media data using a second network intermediate code. Because the hard decoding technique performs decoding with hardware, through a hardware interface, it can overcome the limitation that WebRTC supports only a restricted set of video coding formats; for example, it can support the HEVC video coding type. Because the first network intermediate code may be an intermediate code compiled from first preset-language code capable of decoding preset video coding features, the embodiment can support preset video coding features, such as the HEVC format, that WebRTC does not. The range of applicable video coding types is therefore increased.
An embodiment of the present application decodes the audio data in the target media data using the second network intermediate code. Because the second network intermediate code is an intermediate code compiled from second preset-language code capable of decoding preset audio coding features, the embodiment can support preset audio coding features, such as the AAC format, that WebRTC does not. The range of applicable audio coding types is therefore increased.
In summary, embodiments of the present application use the hard decoding technique, the first network intermediate code, and the second network intermediate code to take over the audio/video decoding for which WebRTC would otherwise be responsible, and can thereby be extended to support more audio/video codec types and coding features.
Drawings
FIG. 1 is a schematic diagram of a data processing system 100 according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a client in the related art;
FIG. 3 is a flow chart of steps of a data processing method of one embodiment of the present application;
FIG. 4 is a flow diagram of a data processing apparatus according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a data processing apparatus according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a data processing system according to one embodiment of the present application;
fig. 7 is a schematic structural diagram of an exemplary apparatus provided in one embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
The data processing method of the embodiments of the present application can be used for transmitting audio/video data. It can be applied in scenarios such as live streaming, cloud rendering, teleconferencing, video chat, and Internet telephony. Live-streaming scenarios may include RTS (low-latency live streaming) scenarios.
In cloud rendering, a 3D (three-dimensional) program is rendered on a remote server. A user clicks a "cloud rendering" button in a client and accesses resources over a high-speed Internet connection; the client issues a corresponding cloud rendering instruction, the server executes the corresponding rendering task according to that instruction, and the resulting rendered picture is transmitted back to the client for display. The cloud rendering instruction may carry a scene file and auxiliary files. A scene file is, for example, a scene the user has built with 3D modeling software: models, lighting arrangements, materials, and so on. Auxiliary files include, for example: reference files, map files, proxy files, photon files, optical area network files, action caches, hair caches, light caches, fluid caches, particle caches, cache files generated by rendering-software plug-ins, and the like. The cloud rendering instruction may further carry rendering parameters, for example: the version of the rendering software used, the file name of the rendered output, the picture resolution and picture format of the rendered output, frame-selection settings (a rendering start frame, a rendering end frame, sequence frames, detailed frames, or a single frame may be selected for rendering), the output image size (width and height), the camera, and so on. The rendered result picture transmitted by the server may serve as the transmitted audio/video data.
The audio/video data transmitted in the embodiments of the present application may be a media stream, which can be understood as an ordered sequence of bytes with a start point and an end point.
Embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein only within the scope permitted by the applicable laws and regulations of the country concerned, and under conditions that meet their requirements (for example, with the user's explicit consent and after giving the user practical notification).
With reference to FIG. 1, a schematic diagram of a data processing system 100 is shown according to one embodiment of the present application. As shown in FIG. 1, the system 100 specifically includes: a play client 110, a server 120, and an anchor client 130, with the server 120 communicatively connected to the play client 110 and the anchor client 130, respectively. The anchor client 130 sends collected data to the server 120; this may be a stream-push process, in which the anchor client 130 continuously sends real-time media data to the server 120 while capturing audio/video. The play client 110 obtains media data stored by the server 120 from the server 120; this may be a stream-pull process, and the media data may be the portion of the data being collected by the anchor client 130 that has already been sent to the server 120. On this basis, media data collected in real time by the anchor client 130 is transmitted to the play client 110 through the server 120, where it is rendered and played, thereby realizing live streaming.
WebRTC may be deployed in each of the play client 110, the server 120, and the anchor client 130, so as to allow web browsers to transmit real-time audio/video data and realize RTS.
The play client 110 and the anchor client 130 may run on any electronic device, which may include, but is not limited to: a PC (personal computer), a mobile phone, a tablet (pad), a smart wearable device, a computer with wireless transceiver capability, a VR (virtual reality) terminal device, an AR (augmented reality) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
The server 120 may be implemented as a single server, a server cluster, or a cloud server. The server 120 may be configured to implement some or all of: signaling services, media services, STUN (Session Traversal Utilities for NAT, where NAT is network address translation) services, and TURN (Traversal Using Relays around NAT) services. In some embodiments, these services may be implemented as separate servers, for example some or all of the signaling server 121, the media server 122, and the STUN/TURN server 123 shown in FIG. 1, the last of which typically implements both the STUN and TURN services; in other embodiments, some or all of the above services may be integrated into the same server in the form of service units.
In the embodiment of the present application, the signaling service may be used to implement signaling interaction between the playing client 110 and the server 120; the media service may be used to receive media data sent by the anchor client 130 and provide the media data to the play client 110; the STUN service is used to detect whether NAT exists around the playback client 110; TURN services are used to penetrate NAT around the playback client 110, enabling the establishment of media data channels between the playback client 110 and the server 120.
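As an illustrative sketch (not part of the patent), the STUN and TURN services above map onto the standard browser `RTCConfiguration` object. The host names, port, and credentials below are placeholders, not values from this application:

```javascript
// Build an RTCConfiguration pointing a playback client at STUN/TURN services
// such as those in Fig. 1. All concrete values here are hypothetical examples.
function buildIceConfig(stunHost, turnHost, username, credential) {
  return {
    iceServers: [
      { urls: `stun:${stunHost}:3478` },                       // STUN: detect NAT around the client
      { urls: `turn:${turnHost}:3478`, username, credential }, // TURN: relay to penetrate NAT
    ],
  };
}

// In a browser, this object would be passed to `new RTCPeerConnection(config)`.
const config = buildIceConfig('stun.example.com', 'turn.example.com', 'user', 'secret');
```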
The streaming (pull) process in the related art generally establishes a media data channel between the play client 110 and the server 120, for example a WebRTC media data channel; in this way, the play client 110 can pull media data, such as video data or audio streams, from the server 120 using WebRTC.
Referring to FIG. 2, a schematic diagram of a client in the related art is shown, where the client 201 may be the aforementioned play client 110. The client 201 may use a WebRTC-related API (application programming interface) provided by the browser to pull media data, such as video data or an audio stream, from the server 202, and decode and render the media data to play it.
In FIG. 2, the client 201 may specifically include: a point-to-point connection module 211 and a video tag module 212. The point-to-point connection module 211 may establish a point-to-point connection with the server 202, receive media data over the corresponding media channel, and provide the media data to the video tag of the video tag module 212. The video tag module 212 may embed the video elements corresponding to the media data in an HTML (HyperText Markup Language) page to play the media data.
In practice, the video tag of the video tag module 212 is based on WebRTC. However, the video encoding formats supported by WebRTC generally include VP8 (On2 TrueMotion VP8), AV1 (AOMedia Video 1), and the like, and do not include the HEVC format.
To address the technical problem that the audio and video coding formats supported by WebRTC in the related art are limited, an embodiment of the present application provides a data processing method that specifically includes: acquiring target media data; decoding video data in the target media data using a hard decoding technique or a first network intermediate code, where the hard decoding technique performs decoding with hardware, through a hardware interface, and the first network intermediate code may be an intermediate code compiled from first preset-language code capable of decoding preset video coding features and able to run in a browser environment; and decoding audio data in the target media data using a second network intermediate code, which may be an intermediate code compiled from second preset-language code capable of decoding preset audio coding features and likewise able to run in a browser environment. Examples of such intermediate code include WASM (WebAssembly) and the like.
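The decoder selection implied above can be sketched as a small routing function: prefer the hardware decoder when it supports the video codec, and fall back to the WASM ("network intermediate code") software decoder otherwise, while audio always uses the second intermediate code. This is a hedged illustration; the codec identifiers and support table are assumptions, not part of the patent:

```javascript
// Route video to the hardware path when supported, else to the WASM decoder.
// hardwareSupport is an illustrative capability map, e.g. { hev1: true, av01: false }.
function chooseVideoDecoder(codec, hardwareSupport) {
  return hardwareSupport[codec] ? 'hard-decode' : 'wasm-decode';
}

// Per the method, audio (e.g. AAC, which stock WebRTC does not decode)
// always goes through the second network intermediate code.
function chooseAudioDecoder(codec) {
  return 'wasm-decode';
}

// Example: HEVC ('hev1') can take the hardware path where available.
const videoPath = chooseVideoDecoder('hev1', { hev1: true, av01: false });
```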
An embodiment of the present application decodes the video data in the target media data using a hard decoding technique or a first network intermediate code, and decodes the audio data in the target media data using a second network intermediate code. Because the hard decoding technique performs decoding with hardware, through a hardware interface, it can overcome the limitation that WebRTC supports only a restricted set of video coding formats; for example, it can support the HEVC video coding type. Because the first network intermediate code may be an intermediate code compiled from first preset-language code capable of decoding preset video coding features, the embodiment can support preset video coding features, such as the HEVC format, that WebRTC does not. The range of applicable video coding types is therefore increased.
An embodiment of the present application decodes the audio data in the target media data using the second network intermediate code. Because the second network intermediate code is an intermediate code compiled from second preset-language code capable of decoding preset audio coding features, the embodiment can support preset audio coding features, such as the AAC format, that WebRTC does not. The range of applicable audio coding types is therefore increased.
In summary, embodiments of the present application use the hard decoding technique, the first network intermediate code, and the second network intermediate code to take over the audio/video decoding for which WebRTC would otherwise be responsible, and can thereby be extended to support more audio/video codec types and coding features.
Method embodiment
Referring to fig. 3, a flowchart illustrating steps of a data processing method according to an embodiment of the present application may specifically include the following steps:
step 301, obtaining target media data;
step 302, decoding video data in the target media data using a hard decoding technique or a first network intermediate code; the hard decoding technique performs decoding with hardware, through a hardware interface; the first network intermediate code may be an intermediate code compiled from first preset-language code having the capability to decode preset video coding features, and the first network intermediate code can run in a browser environment;
step 303, decoding the audio data in the target media data using a second network intermediate code; the second network intermediate code may be an intermediate code compiled from second preset-language code having the capability to decode preset audio coding features, and the second network intermediate code can run in a browser environment.
At least one step included in the method embodiment shown in fig. 3 may be performed by a client, which may be the aforementioned playback client 110. Of course, the embodiment of the present application is not limited to the specific implementation of the method shown in fig. 3.
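Steps 301 through 303 can be sketched as one pipeline. This is a hedged illustration with stand-in functions (all names are hypothetical): in the real client, the fetch step would read from the WebRTC data channel, the video step would invoke the hardware decoder or the first intermediate code, and the audio step the second intermediate code:

```javascript
// The three steps of Fig. 3 as a single pipeline; the three callbacks are
// placeholders for the data-channel reader and the two decoder paths.
async function processMedia(fetchTargetMedia, decodeVideo, decodeAudio) {
  const media = await fetchTargetMedia();        // step 301: obtain target media data
  const frames = await decodeVideo(media.video); // step 302: hard decode or first intermediate code
  const samples = await decodeAudio(media.audio); // step 303: second intermediate code
  return { frames, samples };
}
```

A caller supplies the three stages, e.g. `processMedia(fetchFromDataChannel, hevcDecode, aacDecode)` (names hypothetical).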
In step 301, the embodiments of the present application place no limitation on the communication protocol between the client and the server. For example, the communication protocol between the client and the server may be TCP (Transmission Control Protocol), UDP (User Datagram Protocol), or the like.
For example, in the case of TCP, the connection between the client and the server may be a WebSocket connection. As another example, where UDP is employed, the connection between the client and the server may be a WebRTC point-to-point connection, and a WebRTC data channel (datachannel) may be used to receive media data from the server.
In a specific implementation, media data can be acquired from the server over a data channel, and the target media data can be determined from the media data acquired from the server; the data channel may be a communication link based on the WebRTC (Web Real-Time Communications) technology, and the mode of the data channel may be an unreliable transmission mode.
The channels of WebRTC may include: data channels and media channels. In the related art, the media channel is used for transmitting media data such as audio and video data, and the data channel is used for transmitting data other than audio and video, such as instruction data. Also in the related art, the media channel transmits media data by default, and media data received via the media channel enters the video tag module 212 by default; that is, media data received via the media channel is decoded using WebRTC by default.
To break through this limitation, in which media data received over the media channel is decoded by WebRTC by default, the embodiment of the present application instead uses a WebRTC data channel to receive the media data from the server.
The modes of a WebRTC data channel may include a reliable transmission mode, an unreliable transmission mode, and the like. The reliable transmission mode is the data channel's default mode; it uses a retransmission mechanism with no limit on the number of retransmissions to ensure that data is successfully delivered to the peer. In other words, media data is retransmitted, without limiting the retransmission count, until it is successfully delivered to the peer. The unreliable transmission mode uses a retransmission mechanism with a bounded retransmission count; in other words, media data is retransmitted only a limited number of times. Whether or not the media data has been successfully delivered to the peer, retransmission ends once the retransmission count reaches its upper limit.
In the embodiment of the present application, the data channel is in an unreliable transmission mode, so that the retransmission times can be reduced to a certain extent, and therefore, the transmission efficiency of the media data can be improved. In the embodiment of the application, in the process of receiving the media data from the server by using the data channel of WebRTC technology, the media channel may be in an open state.
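In the standard WebRTC API, the unreliable mode described above corresponds to the `RTCDataChannelInit` options passed when creating a data channel: `maxRetransmits` caps the retransmission count, and `ordered: false` additionally avoids head-of-line blocking. The sketch below only builds the options object, so it can run anywhere; the channel label is an arbitrary example:

```javascript
// Options that put a WebRTC data channel into unreliable mode:
// at most `maxRetransmits` retransmissions, then the packet is given up on.
function unreliableChannelOptions(maxRetransmits = 0) {
  return { ordered: false, maxRetransmits };
}

// In a browser: pc.createDataChannel('media', unreliableChannelOptions(2));
const opts = unreliableChannelOptions(2);
```

Note that the API also offers `maxPacketLifeTime` (a time bound instead of a count bound); the two options are mutually exclusive.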
In a specific implementation, the process of obtaining media data from the server may specifically include: determining a communication protocol corresponding to the received data; and when the communication protocol is a first communication protocol, taking the received data corresponding to the first communication protocol as media data. The received data may refer to data received via a data channel.
In practical applications, the data received via the data channel may include not only media data but also signaling data and the like. Embodiments of the present application can distinguish different received data according to the corresponding communication protocol. Specifically, the first communication protocol may correspond to media data, and the second communication protocol may correspond to custom data.
The first communication protocol can be determined by a person skilled in the art according to the actual application requirements. For example, the first communication protocol may be the UDP protocol, so as to increase the transmission speed of the media data. Examples of the application-layer protocol corresponding to the first communication protocol may include: RTP (Real-time Transport Protocol), RTCP (Real-time Transport Control Protocol), and the like.
In the embodiment of the present application, when the communication protocol is a second communication protocol, the received data corresponding to the second communication protocol may be used as the custom data.
A person skilled in the art can determine the second communication protocol according to the actual application requirements. For example, the second communication protocol may be the TCP protocol, the SCTP (Stream Control Transmission Protocol) protocol, or the like, so as to improve the transmission reliability of the custom data.
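The dispatch between media data and custom data can be sketched as follows. The patent does not fix a wire format, so this sketch assumes RTP-style framing, in which the two high bits of an RTP/RTCP packet's first byte carry version bits `0b10`; anything else is treated as custom (for example, signaling) data:

```javascript
// Sketch of the protocol-based dispatch of received data described above.
// Assumption: media packets use RTP/RTCP framing, whose first byte has
// version bits 0b10 in its two high bits; other packets are custom data.
function classifyReceivedData(bytes) {
  const version = bytes[0] >> 6;            // top two bits of the first byte
  return version === 2 ? 'media' : 'custom';
}
```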
In one implementation manner of the present application, the process of determining the target media data according to the media data acquired from the server may include: and taking the media data acquired from the server as target media data.
In another implementation manner of the present application, the process of determining the target media data according to the media data acquired from the server may include: performing quality of service (QoS, Quality of Service) processing on the media data acquired from the server to obtain the target media data.
In a specific implementation, the third network intermediate code may be used to perform quality of service processing on the media data acquired from the server to obtain the target media data.
The quality of service processing can address data transmission problems of the media data, such as packet loss and jitter, and can improve the data transmission quality of the media data. For example, when the communication protocol between the client and the server is TCP, or when the unreliable transmission mode of the data channel of the WebRTC technology is used, data transmission problems of the media data such as packet loss or jitter may occur.
The quality of service processing employed by embodiments of the present application may include, but is not limited to: IP (Internet Protocol) priority handling, rate adjustment, packet-loss retransmission, forward error correction, backward error correction, and the like. The quality of service processing can control data packet congestion to a certain extent, reduce errors generated in transmission, and improve the transmission quality of the media data.
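One common ingredient of such quality of service processing is a jitter buffer, which holds out-of-order packets and releases them in sequence-number order. The class and field names below are illustrative assumptions, not part of the patent:

```javascript
// Minimal jitter-buffer sketch for the quality of service processing
// described above: out-of-order packets are held, and packets are
// released only once every earlier sequence number has arrived.
class JitterBuffer {
  constructor() {
    this.pending = new Map();   // seq -> packet
    this.nextSeq = 0;           // next sequence number to release
  }
  push(packet) {                // packet: { seq, payload }
    this.pending.set(packet.seq, packet);
    const ready = [];
    while (this.pending.has(this.nextSeq)) {
      ready.push(this.pending.get(this.nextSeq));
      this.pending.delete(this.nextSeq);
      this.nextSeq += 1;
    }
    return ready;               // packets now deliverable in order
  }
}
```

A production jitter buffer would also time out missing packets and trigger the retransmission or error-correction mechanisms listed above.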
In a specific implementation, JS (JavaScript) code or the third network intermediate code may be used to perform quality of service processing on the media data acquired from the server to obtain the target media data. Because JS code is interpreted and executed, its execution speed is limited. The third network intermediate code, by contrast, may be an intermediate code compiled from a third preset language code having the processing capability of quality of service, and the third network intermediate code is capable of running in a browser environment. In practical applications, the third preset language code may be written in a language such as C or C++ and then compiled into an intermediate format, so as to generate a third network intermediate code whose loading and execution speeds are very high.
In step 302, video data in the target media data may be decoded using a hard decoding technique, in which hardware performs the decoding via a hardware interface. Therefore, the embodiment of the application can overcome the problem that the video coding formats supported by the WebRTC technology are limited; for example, the video coding type of the HEVC format can be supported. Examples of hardware decoding interfaces may include WebCodecs, which may be an API provided by the browser. In the process of decoding the video data in the target media data by using WebCodecs, the hardware of the electronic device can be invoked, so that the decoding performance of the video data can be improved.
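A minimal sketch of the hard-decoding path through WebCodecs follows. The HEVC codec string (`hev1.…`) and the callback bodies are illustrative assumptions; actual codec strings and hardware support vary by browser and device:

```javascript
// Sketch: a decoder configuration for the hardware decoding path above.
// 'hev1.1.6.L93.B0' is one example HEVC codec string; 'prefer-hardware'
// asks the browser to use a hardware decoder when available.
function hevcDecoderConfig() {
  return {
    codec: 'hev1.1.6.L93.B0',
    hardwareAcceleration: 'prefer-hardware',
  };
}

// In a browser:
//   const decoder = new VideoDecoder({
//     output: (frame) => { /* render the decoded frame */ frame.close(); },
//     error: (e) => console.error(e),
//   });
//   decoder.configure(hevcDecoderConfig());
//   // then feed EncodedVideoChunk objects via decoder.decode(chunk)
```

`VideoDecoder.isConfigSupported(config)` can be queried first to confirm the format is actually decodable on the device.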
The embodiment of the application can decode the video data in the target media data by using the first network intermediate code. The first network intermediate code may be an intermediate code compiled from a first preset language code having the decoding capability of a preset video coding feature, and the first network intermediate code is capable of running in a browser environment. The preset video coding features may be determined by those skilled in the art according to actual application requirements; for example, the preset video coding features may include the HEVC format, B frames, and the like. The first preset language code may be code written in a language such as C or C++.
The intermediate code may be bytecode: an intermediate-format encoding that can run in a browser and can interact with JavaScript.
In practical applications, according to the preset video coding features, the first preset language code can be written in a language such as C or C++ and then compiled into the first network intermediate code for the browser to call.
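The patent does not name a concrete intermediate format, but WebAssembly is one example of a browser-loadable intermediate code compiled from C/C++ (for instance with the Emscripten toolchain). The eight bytes below are the standard wasm header, which by itself forms a minimal valid (empty) module:

```javascript
// Sketch: the "network intermediate code" above is an intermediate,
// browser-loadable format; WebAssembly is one concrete example
// (an assumption -- the patent itself names no format).
const minimalModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,  // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,  // binary format version 1
]);

// WebAssembly.validate checks whether bytes form a well-formed module.
// A real decoder build would be compiled from C/C++ and loaded with
// WebAssembly.instantiate, then called from JavaScript.
const isValid = WebAssembly.validate(minimalModule);
```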
In the embodiment of the application, video decoding can be carried out in a multithreaded manner at the browser side through the first network intermediate code, which can improve the video decoding speed.
Accordingly, decoding the video data in the target media data by using the first network intermediate code may include: the main thread initializes a plurality of video decoding threads; the main thread sends a video decoding task corresponding to the video data in the target media data to a video decoding thread in an idle state, so that the video decoding thread in the idle state decodes the video decoding task by using the first network intermediate code; wherein one instance corresponding to the first network intermediate code corresponds to one video decoding task.
The video decoding thread may be a worker of the browser. The worker and the main thread do not interfere with each other; after the worker finishes executing its code, it returns the execution result to the main thread. The main thread may determine a plurality of video decoding tasks and send each one to a video decoding thread in an idle state. In practical applications, one video decoding task may correspond to one packet of video data.
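The main-thread dispatch described above can be sketched as a small pool that sends one task to one idle worker. The class and method names are illustrative assumptions; in a browser each entry would wrap a Web Worker created with `new Worker('decoder.js')`, and each worker would hold its own instance of the intermediate code:

```javascript
// Sketch of the main-thread dispatch loop: one decoding task per idle
// worker, one intermediate-code instance per worker/task.
class DecoderPool {
  constructor(workers) {
    this.workers = workers.map((w) => ({ worker: w, busy: false }));
  }
  dispatch(task) {
    const slot = this.workers.find((s) => !s.busy);
    if (!slot) return false;          // all workers busy; caller queues the task
    slot.busy = true;
    slot.worker.postMessage(task);    // worker decodes with its own instance
    return true;
  }
  markIdle(worker) {                  // called when a worker returns its result
    const slot = this.workers.find((s) => s.worker === worker);
    if (slot) slot.busy = false;
  }
}
```

In a real implementation, `markIdle` would be driven by each worker's `onmessage` callback delivering the decoded frame back to the main thread.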
In the embodiment of the present application, one instance corresponding to the first network intermediate code may correspond to one video decoding task. Because no thread is built into a single instance corresponding to the first network intermediate code, the cross-origin isolation requirement can be met, and the security of video decoding can be further improved. Therefore, the embodiment of the application can be applied to application scenarios with cross-origin isolation requirements, such as ToB (to-business) scenarios.
The video decoding result in the embodiment of the present application may be image data, and the format of the image data may be YUV (luminance and chrominance components) or the like. The embodiment of the application can draw the image data on the screen by using the image rendering API provided by the browser.
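Drawing the YUV frames above typically involves a YUV-to-RGB conversion, performed by the rendering API or a shader. A per-pixel sketch using the common BT.601 full-range formulas (the choice of color matrix is an assumption; the patent does not specify one):

```javascript
// Sketch: convert one full-range BT.601 YUV pixel to 8-bit RGB.
function yuvToRgb(y, u, v) {
  const clamp = (x) => Math.max(0, Math.min(255, Math.round(x)));
  return {
    r: clamp(y + 1.402 * (v - 128)),
    g: clamp(y - 0.344136 * (u - 128) - 0.714136 * (v - 128)),
    b: clamp(y + 1.772 * (u - 128)),
  };
}
```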
In step 303, the audio data in the target media data may be decoded using a second network intermediate code. The second network intermediate code may be an intermediate code compiled from a second predetermined language code having a decoding capability of a predetermined audio encoding feature.
The preset audio coding features may be determined by those skilled in the art according to actual application requirements; for example, the preset audio coding features may include the AAC format and the like. The second preset language code may be code written in a language such as C or C++.
In practical applications, according to the preset audio coding features, the second preset language code can be written in a language such as C or C++ and then compiled into the second network intermediate code for the browser to call.
Because the quality of service processing depends to a certain degree on the audio decoding result, and the audio decoding realized by the embodiment of the application based on the second network intermediate code can improve the accuracy of the audio decoding result, the effect of the quality of service processing can also be improved.
According to the embodiment of the application, the audio decoding can be performed in a multithreading mode at the browser end through the second network intermediate code, and the audio decoding speed can be improved.
Accordingly, decoding the audio data in the target media data by using the second network intermediate code may include: the main thread initializes a plurality of audio decoding threads; the main thread sends an audio decoding task corresponding to the audio data in the target media data to an audio decoding thread in an idle state, so that the audio decoding thread in the idle state decodes the audio decoding task by using the second network intermediate code; wherein one instance corresponding to the second network intermediate code corresponds to one audio decoding task.
The main thread may determine a plurality of audio decoding tasks and send the audio decoding tasks to the audio decoding thread in an idle state. In practical applications, an audio decoding task may correspond to a packet of audio data.
In the embodiment of the present application, one instance corresponding to the second network intermediate code may correspond to one audio decoding task. Because no thread is built into a single instance corresponding to the second network intermediate code, the cross-origin isolation requirement can be met, and the security of audio decoding can be improved.
The audio decoding result of the embodiment of the present application may be audio data, and the format of the audio data may be PCM (Pulse Code Modulation) or the like. The embodiment of the application can play the audio data through the audio rendering API provided by the browser.
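Decoded PCM is commonly 16-bit signed integer samples, while the browser's audio rendering API (Web Audio) consumes Float32 samples in [-1, 1]. A small conversion sketch follows; the 16-bit interleaved layout is an assumption, since the patent does not specify the PCM layout:

```javascript
// Sketch: scale 16-bit signed PCM samples to Float32 samples in [-1, 1)
// for playback through the browser's audio rendering API.
function int16ToFloat32(samples) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] / 32768;   // 32768 = 2^15, the Int16 magnitude range
  }
  return out;
}
```

In a browser, the resulting Float32Array would be copied into an `AudioBuffer` channel and scheduled via an `AudioBufferSourceNode`.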
The embodiment of the application can be applied to application scenarios such as live broadcasting, cloud rendering, teleconferencing, video chat, and network telephony, and is used for the stream-pulling process in these application scenarios. The stream-pulling process specifically includes: the client acquires, from the server, the media data stored by the server.
In summary, the data processing method of the embodiment of the present application decodes video data in target media data by using a hard decoding technique or a first network intermediate code, and decodes audio data in the target media data by using a second network intermediate code. Because the hard decoding technology decodes by utilizing hardware according to an interface of the hardware, the embodiment of the application can overcome the problem that the video coding format supported by the WebRTC technology is limited, for example, the video coding type of the HEVC format can be supported. Because the first network intermediate code may be an intermediate code compiled according to a first preset language code having a decoding capability of a preset video coding feature, the embodiment of the present application may support a preset video coding feature that is not supported by WebRTC technology such as HEVC format. Therefore, the application range of the video coding type can be increased.
The embodiment of the application decodes the audio data in the target media data by using the second network intermediate code. The second network intermediate code is an intermediate code compiled according to a second preset language code with the decoding capability of the preset audio coding feature, so that the embodiment of the application can support the preset audio coding feature which is not supported by WebRTC technology such as AAC format. Therefore, the application range of the audio coding type can be increased.
In summary, the embodiment of the application uses the hard decoding technology, the first network intermediate code, and the second network intermediate code to take over the audio/video decoding that the WebRTC technology would otherwise be responsible for, and can thus be extended to support more audio/video coding types and coding features.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments and that the acts referred to are not necessarily required by the embodiments of the present application.
On the basis of the foregoing embodiments, the embodiments of the present application further provide a data processing apparatus, and referring to fig. 4, a schematic structural diagram of the data processing apparatus according to one embodiment of the present application is shown, which may specifically include: an acquisition module 401, a video decoding module 402 and an audio decoding module 403.
Wherein, the obtaining module 401 is configured to obtain target media data;
a video decoding module 402, configured to decode video data in the target media data using a hard decoding technique or a first network intermediate code; in the hard decoding technique, hardware performs the decoding via a hardware interface; the first network intermediate code is an intermediate code compiled from a first preset language code having the decoding capability of a preset video coding feature, and the first network intermediate code can run in a browser environment;
an audio decoding module 403, configured to decode audio data in the target media data using a second network intermediate code; the second network intermediate code is an intermediate code compiled according to a second preset language code with decoding capability of preset audio coding characteristics, and the second network intermediate code can run in a browser environment.
Optionally, the obtaining module 401 may specifically include:
the first acquisition module is used for acquiring media data from the server according to the data channel; the data channel is a communication link based on a webpage multimedia communication technology, and the mode of the data channel is an unreliable transmission mode;
and the determining module is used for determining target media data according to the media data acquired from the server.
Optionally, the first obtaining module may specifically include:
the communication protocol determining module is used for determining a communication protocol corresponding to the received data;
and the second acquisition module is used for taking the received data corresponding to the first communication protocol as media data when the communication protocol is the first communication protocol.
Alternatively, the first communication protocol may be a UDP protocol.
Optionally, the apparatus may further include:
the custom data acquisition module is used for taking the received data corresponding to the second communication protocol as custom data when the communication protocol is the second communication protocol.
Optionally, the audio decoding module 403 may include:
an initialization module for initializing a plurality of audio decoding threads using the main thread;
The task distribution module is used for transmitting an audio decoding task corresponding to the audio data in the target media data to an audio decoding thread in an idle state by utilizing a main thread, so that the audio decoding thread in the idle state decodes the audio decoding task by utilizing a second network intermediate code; wherein an instance corresponding to the second network intermediate code corresponds to an audio decoding task.
Optionally, the determining module may include:
the service quality processing module is used for processing the service quality of the media data acquired from the server by utilizing the third network intermediate code so as to obtain target media data; the third network intermediate code is an intermediate code compiled according to a third preset language code with processing capability of service quality, and the third network intermediate code can run in a browser environment.
In summary, the data processing apparatus of the embodiments of the present application decodes video data in target media data using a hard decoding technique or a first network intermediate code, and decodes audio data in the target media data using a second network intermediate code. Because the hard decoding technology decodes by utilizing hardware according to an interface of the hardware, the embodiment of the application can overcome the problem that the video coding format supported by the WebRTC technology is limited, for example, the video coding type of the HEVC format can be supported. Because the first network intermediate code may be an intermediate code compiled according to a first preset language code having a decoding capability of a preset video coding feature, the embodiment of the present application may support a preset video coding feature that is not supported by WebRTC technology such as HEVC format. Therefore, the application range of the video coding type can be increased.
The embodiment of the application decodes the audio data in the target media data by using the second network intermediate code. The second network intermediate code is an intermediate code compiled according to a second preset language code with the decoding capability of the preset audio coding feature, so that the embodiment of the application can support the preset audio coding feature which is not supported by WebRTC technology such as AAC format. Therefore, the application range of the audio coding type can be increased.
In summary, the embodiment of the application uses the hard decoding technology, the first network intermediate code, and the second network intermediate code to take over the audio/video decoding that the WebRTC technology would otherwise be responsible for, and can thus be extended to support more audio/video coding types and coding features.
Referring to fig. 5, a schematic structural diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include: a point-to-point connection module 501, a parsing distribution module 502, a custom data processing module 503, a quality of service processing module 504, a video decoding module 505, and an audio decoding module 506.
The point-to-point connection module 501 is configured to establish a point-to-point connection with a server, and obtain received data based on a corresponding data channel.
The parsing and distributing module 502 is configured to determine a communication protocol corresponding to the received data, and distribute the received data according to the communication protocol. Specifically, when the communication protocol is the first communication protocol, the received data corresponding to the first communication protocol is distributed, as media data, to the quality of service processing module 504; or, when the communication protocol is the second communication protocol, the received data corresponding to the second communication protocol is distributed, as custom data, to the custom data processing module 503.
The custom data processing module 503 is configured to process custom data such as signaling data.
The quality of service processing module 504 is configured to perform quality of service processing on media data acquired from a server to obtain target media data. Specifically, the quality of service processing can be performed on the media data using the JS code or the third network intermediate code.
The video decoding module 505 is configured to decode video data in the target media data by using a hard decoding technique or a first network intermediate code, so as to obtain image data in YUV format.
The audio decoding module 506 is configured to decode the audio data in the target media data by using the second network intermediate code to obtain the audio data in PCM format.
The video decoding result in the embodiment of the application may be image data, and the format of the image data may be YUV or the like. The embodiment of the application can draw the image data on the screen by utilizing the image rendering API provided by the browser.
The audio decoding result in the embodiment of the present application may be audio data, and the format of the audio data may be PCM or the like. The embodiment of the application can play the audio data out by utilizing the audio rendering API provided by the browser.
With reference now to FIG. 6, a diagram illustrating the architecture of a data processing system is depicted in accordance with one embodiment of the present application, the diagram may include: a server 601 and a client 602; the client 602 may specifically include: a point-to-point connection module 621, a parse and distribute module 622, a custom data processing module 623, a quality of service processing module 624, a video decoding module 625, and an audio decoding module 626.
The point-to-point connection module 621 is configured to establish a point-to-point connection with the server, and obtain the received data based on the corresponding data channel.
The parsing and distributing module 622 is configured to determine a communication protocol corresponding to the received data, and distribute the received data according to the communication protocol. Specifically, when the communication protocol is the first communication protocol, the received data corresponding to the first communication protocol is distributed, as media data, to the quality of service processing module 624; or, when the communication protocol is the second communication protocol, the received data corresponding to the second communication protocol is distributed, as custom data, to the custom data processing module 623.
The custom data processing module 623 is configured to process custom data such as signaling data.
A quality of service processing module 624, configured to perform quality of service processing on the media data acquired from the server by using the third network intermediate code before the video decoding module 625 decodes the video data in the media data and the audio decoding module 626 decodes the audio data in the media data, so as to obtain target media data; the third network intermediate code is an intermediate code compiled according to a third preset language code with processing capability of service quality, and the third network intermediate code can run in a browser environment. Specifically, the quality of service processing can be performed on the media data using the JS code or the third network intermediate code.
The video decoding module 625 is configured to decode video data in the target media data by using a hard decoding technique or a first network intermediate code, so as to obtain image data in YUV format.
An audio decoding module 626 for decoding the audio data in the target media data using the second network intermediate code to obtain PCM formatted audio data.
The embodiment of the application also provides a non-volatile readable storage medium, where one or more modules (programs) are stored, and when the one or more modules are applied to a device, the device may be caused to execute instructions of each method step in the embodiment of the application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause an electronic device to perform a method as described in one or more of the above embodiments. In this embodiment of the present application, the electronic device includes a server, a terminal device, and other devices.
Embodiments of the present disclosure may be implemented as an apparatus for performing a desired configuration using any suitable hardware, firmware, software, or any combination thereof, which may include a server (cluster), terminal, or the like. Fig. 7 schematically illustrates an example apparatus 1700 that may be used to implement various embodiments described herein.
For one embodiment, FIG. 7 illustrates an example apparatus 1700 having one or more processors 1702, a control module (chipset) 1704 coupled to at least one of the processor(s) 1702, a memory 1706 coupled to the control module 1704, a non-volatile memory (NVM)/storage device 1708 coupled to the control module 1704, one or more input/output devices 1710 coupled to the control module 1704, and a network interface 1712 coupled to the control module 1704.
The processor 1702 may include one or more single-core or multi-core processors, and the processor 1702 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1700 can be used as a server, a terminal, or the like in the embodiments of the present application.
In some embodiments, the apparatus 1700 may include one or more computer-readable media (e.g., memory 1706 or NVM/storage 1708) having instructions 1714 and one or more processors 1702 combined with the one or more computer-readable media configured to execute the instructions 1714 to implement the modules to perform the actions described in this disclosure.
For one embodiment, the control module 1704 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1702 and/or any suitable device or component in communication with the control module 1704.
The control module 1704 may include a memory controller module to provide an interface to the memory 1706. The memory controller modules may be hardware modules, software modules, and/or firmware modules.
Memory 1706 may be used to load and store data and/or instructions 1714 for device 1700, for example. For one embodiment, memory 1706 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, memory 1706 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1704 may include one or more input/output controllers to provide interfaces to the NVM/storage 1708 and the input/output device(s) 1710.
For example, NVM/storage 1708 may be used to store data and/or instructions 1714. NVM/storage 1708 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1708 may include a storage resource as part of the device on which apparatus 1700 is installed or may be accessible by the device without necessarily being part of the device. For example, NVM/storage 1708 may be accessed over a network via input/output device(s) 1710.
The input/output device(s) 1710 may provide an interface for the apparatus 1700 to communicate with any other suitable device, and the input/output device 1710 may include a communication component, an audio component, a sensor component, and the like. The network interface 1712 may provide the device 1700 with an interface to communicate over one or more networks, and the device 1700 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as accessing a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 1702 may be packaged together with logic of one or more controllers (e.g., memory controller modules) of the control module 1704. For one embodiment, at least one of the processor(s) 1702 may be packaged together with logic of one or more controllers of the control module 1704 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1702 may be integrated on the same die as logic of one or more controllers of the control module 1704. For one embodiment, at least one of the processor(s) 1702 may be integrated on the same die as logic of one or more controllers of the control module 1704 to form a system on a chip (SoC).
In various embodiments, the apparatus 1700 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, the device 1700 may have more or fewer components and/or different architectures. For example, in some embodiments, the apparatus 1700 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and a speaker.
The device 1700 may employ a main control chip as the processor or the control module; sensor data, location information, and the like may be stored in the memory or the NVM/storage device; the sensor group may serve as an input/output device; and the communication interface may include the network interface.
The embodiment of the application also provides electronic equipment, which comprises: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in one or more of the embodiments herein.
Embodiments also provide one or more machine-readable media having executable code stored thereon that, when executed, cause a processor to perform a method as described in one or more of the embodiments of the present application.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concepts. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal device that comprises that element.
The foregoing has described in detail the data processing method, data processing apparatus, electronic device, and storage medium provided in the present application, using specific examples to illustrate their principles and embodiments; the above examples are intended only to aid understanding of the method and its core idea. Meanwhile, those skilled in the art may, in accordance with the ideas of the present application, make changes to the specific embodiments and the scope of application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (10)

1. A method of data processing, the method comprising:
acquiring target media data;
decoding video data in the target media data by using a hard decoding technology or a first network intermediate code; wherein the hard decoding technology performs decoding through an interface of the hardware; the first network intermediate code is an intermediate code compiled from a first preset language code having decoding capability for a preset video coding characteristic, and the first network intermediate code can run in a browser environment;
decoding audio data in the target media data by using a second network intermediate code; wherein the second network intermediate code is an intermediate code compiled from a second preset language code having decoding capability for a preset audio coding characteristic, and the second network intermediate code can run in a browser environment.
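Claim 1 can be read as a simple path-selection rule: try the hardware path for video, fall back to the browser-runnable intermediate code (in practice typically a WebAssembly module), and always route audio through the intermediate code. A minimal, hypothetical sketch of that selection logic follows; the function names and the injected capability check are illustrative assumptions, not part of the claim (in a real browser the check would typically query a hardware decoding API such as WebCodecs).

```typescript
// Hypothetical sketch of the decoder-path selection in claim 1.

type VideoPath = "hardware" | "wasm";

// hardwareSupports is an injected capability check so the selection logic
// stays testable; in a browser it would wrap a hardware decoding API query.
function selectVideoPath(
  codec: string,
  hardwareSupports: (codec: string) => boolean
): VideoPath {
  // Prefer the hard decoding technology; otherwise fall back to the
  // first network intermediate code running in the browser environment.
  return hardwareSupports(codec) ? "hardware" : "wasm";
}

function selectAudioPath(): "wasm" {
  // Claim 1 always routes audio through the second network intermediate code.
  return "wasm";
}
```

The split reflects that hardware video decoding is widely available while audio codecs used in low-latency streaming often are not, so a software (intermediate-code) audio decoder is the uniform choice.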
2. The method of claim 1, wherein the obtaining the target media data comprises:
acquiring media data from a server through a data channel; wherein the data channel is a communication link based on a web multimedia communication technology, and the data channel operates in an unreliable transmission mode;
and determining target media data according to the media data acquired from the server.
3. The method of claim 2, wherein the obtaining media data from the server comprises:
determining a communication protocol corresponding to the received data;
and when the communication protocol is a first communication protocol, taking the received data corresponding to the first communication protocol as media data.
4. A method according to claim 3, wherein the first communication protocol is the UDP protocol.
5. A method according to claim 3, characterized in that the method further comprises:
and under the condition that the communication protocol is a second communication protocol, taking the received data corresponding to the second communication protocol as the custom data.
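Claims 3 to 5 describe demultiplexing the received data by its communication protocol: data arriving under the first protocol (e.g. UDP) is treated as media data, while data under a second protocol is treated as custom data. A hypothetical sketch of that routing follows; the packet shape and protocol tags are illustrative assumptions.

```typescript
// Hypothetical sketch of the protocol-based demultiplexing in claims 3-5.

type Protocol = "udp" | "custom";

interface Packet {
  protocol: Protocol;
  payload: Uint8Array;
}

interface Demuxed {
  media: Uint8Array[];  // first communication protocol: media data
  custom: Uint8Array[]; // second communication protocol: custom data
}

function demux(packets: Packet[]): Demuxed {
  const out: Demuxed = { media: [], custom: [] };
  for (const p of packets) {
    // Route by the communication protocol of each received packet.
    if (p.protocol === "udp") out.media.push(p.payload);
    else out.custom.push(p.payload);
  }
  return out;
}
```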
6. The method according to any one of claims 1 to 5, wherein the decoding audio data in the target media data using a second network intermediate code comprises:
the main thread initializes a plurality of audio decoding threads;
the main thread sends an audio decoding task corresponding to the audio data in the target media data to an audio decoding thread in an idle state, so that the audio decoding thread in the idle state decodes the audio decoding task by using the second network intermediate code; wherein one instance of the second network intermediate code corresponds to one audio decoding task.
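The scheduling in claim 6 amounts to a main thread handing each decoding task to the first idle worker in a pool. A minimal sketch follows; in a browser the workers would be Web Workers, each holding its own instance of the audio intermediate-code module, but the pool is modelled synchronously here so the assignment logic stays testable. The names are hypothetical.

```typescript
// Hypothetical sketch of the idle-thread task assignment in claim 6.

interface AudioWorker {
  id: number;
  idle: boolean;
}

// Assigns a decoding task to an idle worker; returns the worker id,
// or null when every worker in the pool is busy.
function assignTask(pool: AudioWorker[]): number | null {
  const worker = pool.find((w) => w.idle);
  if (!worker) return null;
  worker.idle = false; // the worker is decoding; it flips back to idle when done
  return worker.id;
}
```

Binding one intermediate-code instance to one task avoids sharing mutable decoder state across concurrent decodes.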
7. The method of claim 2, wherein the determining the target media data based on the media data obtained from the server comprises:
processing the media data acquired from the server by using a third network intermediate code to acquire target media data; the third network intermediate code is an intermediate code compiled according to a third preset language code with processing capability of service quality, and the third network intermediate code can run in a browser environment.
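Because claim 2's data channel uses an unreliable transmission mode, the quality-of-service processing in claim 7 plausibly includes reordering and deduplicating packets before they reach the decoders. The patent does not specify the QoS steps, so the following is only an illustrative sketch of one such step, with assumed packet fields.

```typescript
// Hypothetical sketch of one quality-of-service step for claim 7:
// deduplicate and reorder packets arriving over an unreliable transport.

interface SeqPacket {
  seq: number;     // sequence number assigned by the sender
  payload: string;
}

function reorder(packets: SeqPacket[]): SeqPacket[] {
  const seen = new Map<number, SeqPacket>();
  for (const p of packets) {
    if (!seen.has(p.seq)) seen.set(p.seq, p); // keep first copy, drop duplicates
  }
  // Restore sender order by sequence number.
  return Array.from(seen.values()).sort((a, b) => a.seq - b.seq);
}
```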
8. A data processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring target media data;
the video decoding module is used for decoding video data in the target media data by using a hard decoding technology or a first network intermediate code; wherein the hard decoding technology performs decoding through an interface of the hardware; the first network intermediate code is an intermediate code compiled from a first preset language code having decoding capability for a preset video coding characteristic, and the first network intermediate code can run in a browser environment;
an audio decoding module, configured to decode audio data in the target media data using a second network intermediate code; the second network intermediate code is an intermediate code compiled according to a second preset language code with decoding capability of preset audio coding characteristics, and the second network intermediate code can run in a browser environment.
9. An electronic device, comprising: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of claims 1-7.
10. One or more machine readable media having executable code stored thereon that, when executed, causes a processor to perform the method of any of claims 1-7.
CN202310601530.0A 2023-05-22 2023-05-22 Data processing method, device, equipment and medium Pending CN116567321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310601530.0A CN116567321A (en) 2023-05-22 2023-05-22 Data processing method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN116567321A true CN116567321A (en) 2023-08-08

Family

ID=87486033


Country Status (1)

Country Link
CN (1) CN116567321A (en)

Similar Documents

Publication Publication Date Title
US11653036B2 (en) Live streaming method and system, server, and storage medium
US20220263885A1 (en) Adaptive media streaming method and apparatus according to decoding performance
US10412130B2 (en) Method and apparatus for playing media stream on web browser
US10567809B2 (en) Selective media playing method and apparatus according to live streaming and recorded streaming
EP2120157B1 (en) Systems, methods and mediums for making multimedia content appear to be playing on a remote device
US10140105B2 (en) Converting source code
WO2017219896A1 (en) Method and device for transmitting video stream
US10979785B2 (en) Media playback apparatus and method for synchronously reproducing video and audio on a web browser
CN111147947B (en) Websocket-based flv video transmission and webpage playing method
CN110870282B (en) Processing media data using file tracks of web content
US10862940B1 (en) Low latency live video on a communication session
CN108337246B (en) Media playback apparatus and media service apparatus preventing playback delay
US20180213274A1 (en) Apparatus and method for playing back and seeking media in web browser
CN112653700A (en) Website video communication method based on WEBRTC
KR101942269B1 (en) Apparatus and method for playing back and seeking media in web browser
US20150341634A1 (en) Method, apparatus and system to select audio-video data for streaming
CN113938470B (en) Method and device for playing RTSP data source by browser and streaming media server
WO2023040825A1 (en) Media information transmission method, computing device and storage medium
CN114745361B (en) Audio and video playing method and system for HTML5 browser
CN115865884A (en) Network camera data access device and method, network camera and medium
US9571790B2 (en) Reception apparatus, reception method, and program thereof, image capturing apparatus, image capturing method, and program thereof, and transmission apparatus, transmission method, and program thereof
WO2023142665A1 (en) Image processing method and apparatus, and computer device, storage medium and program product
CN116567321A (en) Data processing method, device, equipment and medium
WO2022116822A1 (en) Data processing method and apparatus for immersive media, and computer-readable storage medium
CN111385081A (en) End-to-end communication method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination