CN116781918A - Data processing method and device for web page real-time communication, and display device


Info

Publication number
CN116781918A
Authority
CN
China
Prior art keywords: image data, data, coding, image, coded
Prior art date
Legal status
Pending
Application number
CN202210219573.8A
Other languages
Chinese (zh)
Inventor
陈耀宗 (Chen Yaozong)
郝征科 (Hao Zhengke)
Current Assignee
Hisense Electronic Technology (Shenzhen) Co., Ltd.
Original Assignee
Hisense Electronic Technology (Shenzhen) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hisense Electronic Technology (Shenzhen) Co., Ltd.
Priority to CN202210219573.8A
Publication of CN116781918A
Legal status: Pending


Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The embodiments provide a data processing method and apparatus for web page real-time communication, and a display device. The display device includes an image collector and a controller. If the controller does not have encoding capability, it receives encoded image data output by the image collector; if the controller has encoding capability, it receives non-encoded image data output by the image collector. Thus, even if the internal chip of the display device lacks encoding capability, the capability of the external image collector can be fully utilized without relying on the CPU for encoding, which avoids excessive CPU resource occupancy. The encoding quality can also meet the data processing requirement, so that obvious video frame loss and stuttering during data transmission are avoided, improving the user experience.

Description

Data processing method and device for web page real-time communication, and display device
Technical Field
The present application relates to the field of display devices, and in particular, to a method and an apparatus for processing data in real-time communication of a web page, and a display device.
Background
With the development of intelligent display devices, web page real-time communication technology has been widely applied in video call schemes based on intelligent display devices. Such schemes adopt the open-source WebRTC (Web Real-Time Communication) technical framework based on Chromium (an open-source web browser project led by Google).
In the concrete implementation of the Chromium capture framework, the data collected by the camera falls into two formats. Video data with a resolution of less than 640 x 480 is captured in YUV (a color encoding method) format, while video with a resolution greater than 640 x 480 is captured in the MJPEG (Motion Joint Photographic Experts Group, a video compression format) encoding format. In the latter case, immediately after obtaining MJPEG data from the camera, the framework calls the libYUV library (a Google open-source library implementing conversion, rotation, and scaling between various YUV and RGB formats) to soft-decode the data. Once decoded into YUV format, the three YUV component planes are stored in a video_frame shared memory, a shared memory channel is created, and the data is then transmitted along two paths. Through layer-by-layer forwarding, one path is delivered to the WebRTC internal encoding module for encoding, and the other path is delivered to the rendering module of the Render process for preview playback.
The above data processing is easy to perform for a display device with a high-performance chip. Most display devices today, however, have only decoding capability; their encoding capability is weak or even absent. Therefore, to perform the above data processing, they can only rely on the CPU (Central Processing Unit) for encoding. The CPU resource occupancy becomes high, the CPU encoding quality cannot meet the data processing requirement, and ultimately video frame loss and stuttering are obvious during data transmission, degrading the user experience.
Disclosure of Invention
The application provides a data processing method and apparatus for web page real-time communication, and a display device, to solve the following problem: most current display devices have only decoding capability, while encoding capability is weak or even absent, so to perform the required data processing they can only rely on the CPU (Central Processing Unit) for encoding; the CPU resource occupancy becomes high, the CPU encoding quality cannot meet the data processing requirement, and ultimately video frame loss and stuttering are obvious during data transmission, degrading the user experience.
In a first aspect, the present embodiment provides a display device, including:
an image collector, configured to collect non-encoded image data or encoded image data, where the non-encoded image data is image data that has not been encoded, and the encoded image data is image data that has been encoded;
a controller, configured to perform:
when the controller does not have encoding capability, receiving the encoded image data output by the image collector, encapsulating the encoded image data, and sending the encapsulated encoded image data to a receiving-end device, so that an image corresponding to the encoded image data is played on the receiving-end device;
when the controller has encoding capability, receiving the non-encoded image data output by the image collector, encoding the non-encoded image data, and then sending it to the receiving-end device, so that an image corresponding to the non-encoded image data is played on the receiving-end device.
In a second aspect, the present embodiment provides a data processing apparatus for web page real-time communication, applied to a display device, the apparatus including:
a data encapsulation module, configured to perform: when the controller does not have encoding capability, receiving the encoded image data output by the image collector, encapsulating the encoded image data, and sending the encapsulated encoded image data to a receiving-end device, so that an image corresponding to the encoded image data is played on the receiving-end device, where the encoded image data is image data that has been encoded;
a data encoding module, configured to perform: when the controller has encoding capability, receiving the non-encoded image data output by the image collector, encoding it, and then sending it to the receiving-end device, so that an image corresponding to the non-encoded image data is played on the receiving-end device, where the non-encoded image data is image data that has not been encoded.
In a third aspect, the present embodiment provides a data processing method for web page real-time communication, applied to a controller of a display device, the display device further including an image collector configured to collect non-encoded image data or encoded image data, where the non-encoded image data is image data that has not been encoded and the encoded image data is image data that has been encoded, the method including:
when the controller does not have encoding capability, receiving the encoded image data output by the image collector, encapsulating the encoded image data, and sending the encapsulated encoded image data to a receiving-end device, so that an image corresponding to the encoded image data is played on the receiving-end device;
when the controller has encoding capability, receiving the non-encoded image data output by the image collector, encoding the non-encoded image data, and then sending it to the receiving-end device, so that an image corresponding to the non-encoded image data is played on the receiving-end device.
The embodiments provide a data processing method and apparatus for web page real-time communication, and a display device. The display device includes an image collector and a controller; the image collector is configured to collect non-encoded image data or encoded image data, where the non-encoded image data is image data that has not been encoded and the encoded image data is image data that has been encoded. If the controller does not have encoding capability, it receives the encoded image data output by the image collector, encapsulates it, and then sends it to the receiving-end device, so that the receiving-end device plays the image corresponding to the encoded image data. If the controller has encoding capability, it receives the non-encoded image data output by the image collector, encodes it, and then sends it to the receiving-end device, so that the image corresponding to the non-encoded image data is played on the receiving-end device. Thus, even if the internal chip of the display device lacks encoding capability, the capability of the external image collector can be fully utilized without relying on the CPU for encoding, which avoids excessive CPU resource occupancy. The encoding quality can also meet the data processing requirement, so that obvious video frame loss and stuttering during data transmission are avoided, improving the user experience.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application; for a person skilled in the art, other drawings may be obtained from these drawings without inventive effort.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device 200 according to some embodiments;
FIG. 5 illustrates a framework diagram of a data processing system for web page real-time communication, in accordance with some embodiments;
FIG. 6 illustrates a data processing implementation flow diagram for web page real-time communication, in accordance with some embodiments;
FIG. 7 illustrates a flow chart of a method of data processing for web page real-time communication, in accordance with some embodiments.
Detailed Description
To make the objects and embodiments of the present application clearer, exemplary embodiments of the present application are described in detail below with reference to the accompanying drawings, in which exemplary embodiments of the present application are illustrated. Obviously, the described exemplary embodiments are only some, not all, of the embodiments of the present application.
It should be noted that the brief description of terminology in the present application is only intended to facilitate understanding of the embodiments described below and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be understood according to their ordinary and customary meanings.
The terms "first," second, "" third and the like in the description and in the claims and in the above drawings are used for distinguishing between similar or similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display device 200 is also in data communication with a server 400, and a user can operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, controlling the display device 200 wirelessly or by wire. The user may control the display device 200 by inputting user instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 may include any of a mobile terminal 300A, a tablet, a computer, a notebook, an AR/VR device, etc.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in manners other than by the control apparatus 100 and the smart device 300; for example, a module for acquiring voice commands configured inside the display device 200 may directly receive the user's voice command control, or the user's voice command control may be received through a voice control apparatus configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
In some embodiments, software steps performed by one step execution body may migrate on demand to be performed on another step execution body in data communication therewith. For example, software steps executed by the server may migrate to be executed on demand on a display device in data communication therewith, and vice versa.
Fig. 2 exemplarily shows a configuration block diagram of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an operation instruction input by a user and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200.
Fig. 3 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the controller includes a central processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth interfaces for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output from the controller and displaying video content, image content, menu manipulation interface components, a user manipulation UI interface, and the like.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the modem 210 receives broadcast television signals via wired or wireless reception and demodulates audio-video signals, as well as EPG data signals, from among a plurality of wireless or wired broadcast television signals.
In some embodiments, the communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near-field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the control apparatus 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals from the external environment or from interaction with the outside. For example, the detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; or the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or another operable control. The operations related to the selected object include: displaying an operation of connecting to a hyperlinked page, document, image, or the like, or executing the program corresponding to the icon.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first through nth interfaces for input/output, a communication bus (Bus), and the like.
The CPU processor is used for executing operating system and application program instructions stored in the memory, and for executing various applications, data, and content according to various interactive instructions received from the outside, so as to finally display and play various audio and video content. The CPU processor may include a plurality of processors, for example one main processor and one or more sub-processors.
In some embodiments, a graphics processor is used to generate various graphical objects, such as at least one of icons, operation menus, and graphics displayed in response to user input instructions. The graphics processor includes an arithmetic unit, which performs operations by receiving the various interactive instructions input by the user and displays various objects according to their display attributes; it also includes a renderer for rendering the various objects obtained by the arithmetic unit, and the rendered objects are displayed on the display.
In some embodiments, the video processor is configured to receive an external video signal and perform at least one of decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image composition, and similar processing according to the standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As shown in fig. 4, a system of a display device may include a kernel (Kernel), a command parser (shell), a file system, and application programs. The kernel, shell, and file system together form the basic operating system structure that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, the scheduler, signals, and inter-process communication (IPC) are operated and maintained. After the kernel is started, the shell and user applications are loaded. An application is compiled into machine code after being started, forming a process.
As shown in fig. 4, the system of the display device is divided, from top to bottom, into three layers: an application layer, a middleware layer, and a hardware layer. The application layer mainly contains the common applications on the television and an application framework (Application Framework); the common applications are mainly applications developed based on a browser, such as HTML5 apps, as well as native applications (Native Apps).
The application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, for example: file access, data exchange, and the interfaces for using these functions (toolbars, status bars, menus, dialog boxes).
Native applications (Native APPs) may support online or offline, message pushing, or local resource access.
The middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
The hardware layer mainly includes the HAL interface, hardware, and drivers. The HAL interface is a unified interface to which all television chips are docked; the specific logic is implemented by each chip. The drivers mainly include: audio driver, display driver, Bluetooth driver, camera driver, Wi-Fi driver, USB driver, HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor), power driver, and the like.
Encoding is commonly known as serialization: it serializes an object into a byte array for network transmission, data persistence, or other purposes. Conversely, decoding (deserialization) restores the byte array read from the network, disk, or elsewhere into the original object (typically a copy of the original object) to facilitate subsequent business logic operations. When making a remote cross-process service call, a specific codec technology is required to encode or decode the object to be transmitted over the network, so as to complete the remote call.
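As a minimal sketch of this encode/decode symmetry (the Frame struct and function names are illustrative assumptions, not part of the scheme):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Hypothetical plain-old-data message used only to illustrate the idea.
    struct Frame {
        uint32_t id;
        uint32_t length;
    };

    // Encoding: serialize the object into a byte array for transmission or storage.
    std::vector<uint8_t> Encode(const Frame& f) {
        std::vector<uint8_t> bytes(sizeof(Frame));
        std::memcpy(bytes.data(), &f, sizeof(Frame));
        return bytes;
    }

    // Decoding: restore a copy of the original object from the byte array.
    Frame Decode(const std::vector<uint8_t>& bytes) {
        Frame f{};
        std::memcpy(&f, bytes.data(), sizeof(Frame));
        return f;
    }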
The audio and video fields adopted analog technology early on and have now developed into digital technology. The main benefits of digitization are high reliability, the ability to eliminate transmission and storage losses, and convenience for computer processing and network transmission. After digitization, audio and video processing enters the field of computer technology and is essentially the processing of computer data. The raw video data generated after image information is collected has a very large volume; for some applications that play the data directly and locally after collection, compression need not be considered. In reality, however, more applications involve the transmission and storage of video; the transmission network and storage devices cannot tolerate the huge volume of raw video data, which must therefore be encoded and compressed before transmission and storage.
A codec is a device or program capable of transforming a signal or a data stream. The transformations referred to here include encoding a signal or data stream (typically for transmission, storage, or encryption) and decoding an encoded stream. In the field of display devices, codecs mainly serve to compress and decompress video signals or image signals.
With the development of intelligent display devices, web page real-time communication technology has been widely applied in video call schemes based on intelligent display devices. Such schemes adopt the open-source WebRTC (Web Real-Time Communication) technical framework based on Chromium (an open-source web browser project led by Google).
In the concrete implementation of the Chromium capture framework, the data collected by the camera falls into two formats. Video data with a resolution of less than 640 x 480 is captured in YUV (a color encoding method) format, while video with a resolution greater than 640 x 480 is captured in the MJPEG (Motion Joint Photographic Experts Group, a video compression format) encoding format. In the latter case, immediately after obtaining MJPEG data from the camera, the framework calls the libYUV library (a Google open-source library implementing conversion, rotation, and scaling between various YUV and RGB formats) to soft-decode the data. Once decoded into YUV format, the three YUV component planes are stored in a video_frame shared memory, a shared memory channel is created, and the data is then transmitted along two paths. Through layer-by-layer forwarding, one path is delivered to the WebRTC internal encoding module for encoding, and the other path is delivered to the rendering module of the Render process for preview playback.
The above data processing is easy to perform for a display device with a high-performance chip. Most display devices today, however, have only decoding capability; their encoding capability is weak or even absent. The current video capture format supports only capture, preview, and transmission in the YUV and MJPEG formats, and hardware codec acceleration by means of a GPU (Graphics Processing Unit) cannot be performed in embedded devices. Therefore, to perform the above data processing, only the CPU (Central Processing Unit) can be relied on for encoding. The CPU resource occupancy becomes high, the CPU encoding quality cannot meet the data processing requirement, and ultimately video frame loss and stuttering are obvious during data transmission, degrading the user experience.
To solve the above problems, the present application provides a display device including at least an image collector and a controller; fig. 5 shows a framework diagram of a data processing system for web page real-time communication. The image collector is used to collect image data. The image collector may be a camera connected to the controller of the display device; it can receive instructions from the controller and send the collected image data to the controller. For ease of understanding, the scheme is illustrated below with the image collector being a camera.
In order to clearly illustrate the embodiments of the present application, explanations of some related terms are given below.
Chromium: a web browser project led by Google. The Chromium project was developed primarily for the Google Chrome application, whereas CEF (Chromium Embedded Framework, an open-source project based on Google Chromium) aims to provide embeddable browser support for third-party applications. CEF isolates the complex code of the underlying Chromium and Blink (a layout engine) and provides a set of production-level stable APIs (Application Program Interfaces), release branches that track specific Chromium versions, and binary packages. Most CEF features provide rich default implementations that allow users to make as few customizations as necessary to meet their needs.
CEF3 uses a multi-process architecture. The Browser process is defined as the main process and is responsible for window management, interface drawing, and network interactions. The rendering of Blink and the execution of JS (the scripting language of the Web) are placed in a separate Render process; in addition, the Render process is responsible for JS Binding (the infrastructure for communicating with the JS engine) and access to DOM nodes. In the default process model, a new Render process is created for each tab page. Other processes are created on demand, such as processes that manage plug-ins and processes that handle compositing acceleration.
The processes of CEF3 may communicate with each other via IPC (Inter-Process Communication). The Browser and Render processes can communicate bidirectionally by sending asynchronous messages; the Render process can even register asynchronous JavaScript APIs to which the Browser process responds.
WebRTC (Web Real-Time Communication): traditional Web data updates require refreshing the web page to display the updated content. The browser adopts a Browser/Server architecture based on the HTTP protocol, whose working mode is that the client sends a request to the server and the server returns a response after receiving the request. This working mode displays data on request.
Such a working mode has its benefits, but it also causes many problems. Today, as Web applications become more and more popular, there is often a need for the server to actively push data to the client, for example for event pushing and web chat. These requirements cannot be met with the traditional Web data update mode, so a new technology is required: web real-time communication technology.
WebRTC implements Web-based voice or video conversations, with the goal of enabling real-time communication on the Web end without plug-ins. WebRTC provides the core technologies of video conferencing, including audio and video capture, codec, network transmission, and display. WebRTC embeds into the Web browser the multimedia modules required by real-time communication applications (including processing modules such as media stream capture and codec, denoising, jitter buffering, and image enhancement) and protocols such as network transmission, session management, and signaling abstraction, eliminating the differences between underlying hardware and operating systems. Through point-to-point transmission between browsers, WebRTC establishes a channel suitable for data stream transmission, so that communication between browsers can be freed from dependence on an intermediate server.
The core modules in the WebRTC framework include the following:
and the session management/signaling session abstract module is an abstract session layer and realizes the functions of establishing and managing the deflection session. The video engine module mainly provides an integral framework for video processing, and comprises a process from capturing images by a camera to transmitting and finally displaying the video. A number of underlying APIs are encapsulated in the module to implement audio processing functions.
Multiple multimedia types are supported for video image capture (including the RGB format, YUY2 format, UYVY format, etc.), and the size and frame rate of video frames can be controlled. Meanwhile, by enumerating video capture devices and acquiring their device information and video data, most video capture devices are supported, which greatly improves the efficiency of video image capture and processing and also enhances flexibility. For video codec, the I420 (YUV standard format 4:2:0)/VP8 (a video encoder and decoder developed and promoted by Google) video image codec technology can be adopted; VP8 can ensure improved video quality with a smaller amount of data, making it suitable for real-time video transmission. To obtain higher-quality video images, noise reduction, color enhancement, brightness detection, and the like need to be performed while processing each video frame.
The audio engine module mainly provides an overall framework and solution for audio processing, covering the whole process from capture to the network transmission end. It specifically includes device handling, audio codec, audio encryption, sound files, volume control, and other processing. A number of underlying APIs are encapsulated in this module to implement the audio processing functions.
The transmission module mainly ensures the transmission of the media streams. WebRTC mainly uses the Secure Real-time Transport Protocol (SRTP) to ensure more secure and reliable transmission of audio and video streams.
YUV: one type of image format used in video, picture, camera, etc. applications is in fact the name of the color space that is common to all "YUV" pixel formats. Unlike the RGB format (red-green-blue), YUV is represented by a "luminance" component called Y (corresponding to gray) and two "chrominance" components, called U (blue projection) and V (red projection), respectively, and is thus named.
libYUV: google open source implementation various libraries for conversion, rotation, scaling between YUV and RGB.
Shared memory: shared memory is one of the most efficient modes of inter-process communication, because processes can read and write the memory directly without any copying of data. To exchange information among multiple processes, the kernel sets aside a block of memory, and this memory area can be mapped by each process that needs to access it into its own private address space. A process can therefore read and write the memory area directly without copying data, which greatly improves efficiency. Of course, since multiple processes share one piece of memory, some synchronization mechanism, such as a mutex or semaphore, is also required.
The implementation of shared memory is divided into two steps: the first step is to create the shared memory with the function shmget(), i.e., obtain a shared memory area from memory; the second step is to map the created shared memory into a specific process's address space with the function shmat(). The shared memory can then be used, i.e., operated on with unbuffered I/O (Input/Output) read and write commands.
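The two steps can be sketched as follows (a minimal example for illustration only; the synchronization mentioned above is omitted):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Step 1: create a shared memory segment with shmget().
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
        if (shmid < 0) { perror("shmget"); return 1; }

        // Step 2: map the segment into this process's address space with shmat().
        void* addr = shmat(shmid, nullptr, 0);
        if (addr == reinterpret_cast<void*>(-1)) { perror("shmat"); return 1; }

        // The segment can now be read and written like ordinary memory,
        // with no copying of data between the processes that attach it.
        std::strcpy(static_cast<char*>(addr), "video_frame");

        shmdt(addr);                       // detach the mapping
        shmctl(shmid, IPC_RMID, nullptr);  // mark the segment for removal
        return 0;
    }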
Enumeration: in C/C++, an enumeration is a set of named integer constants; enumerations are common in everyday life. For example, SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, representing the days of the week, is an enumeration. The declaration of an enumeration is similar to that of structures and unions.
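The week example from the paragraph above, written out:

    // An enumeration: a set of named integer constants.
    enum Weekday {
        SUNDAY,    // 0
        MONDAY,    // 1
        TUESDAY,   // 2
        WEDNESDAY, // 3
        THURSDAY,  // 4
        FRIDAY,    // 5
        SATURDAY   // 6
    };

    Weekday today = FRIDAY;  // stored as the integer 5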
v4l2 (Video for Linux 2): the Linux kernel driver framework for video devices. In Linux, a video device is a device file that can be read and written as if accessing an ordinary file, and the camera appears at /dev/video0. v4l2 is a set of video frameworks specially designed for Linux devices, with its main framework inside the Linux kernel; it can be understood as the video source capture driver framework of the whole Linux system. It is widely applied in embedded devices, mobile terminals, and personal computer equipment; products on the market, such as mobile phones and dashboard cameras, use this framework for video capture.
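A minimal v4l2 sketch of opening the camera device file and enumerating the pixel formats it can output (illustrative only; a real capture pipeline also negotiates a format and queues buffers):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>
    #include <cstdio>

    int main() {
        // The camera is an ordinary device file.
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        // Query the driver's capabilities.
        v4l2_capability cap{};
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
            std::printf("driver: %s, card: %s\n", cap.driver, cap.card);

        // Enumerate the pixel formats the device can output
        // (e.g. YUYV, MJPEG, or H264 if the camera encodes on board).
        v4l2_fmtdesc fmt{};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++)
            std::printf("format %u: %s\n", fmt.index, fmt.description);

        close(fd);
        return 0;
    }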
The embodiments of the present application take the display device as an example to specifically describe the technical scheme of data processing for web page real-time communication.
The image collector collects either non-encoded image data or encoded image data. The non-encoded image data is image data that has not been encoded, and the encoded image data is image data that has been encoded. When the image collector collects encoded image data, it first collects non-encoded image data and then encodes it to obtain the encoded image data. The image collector of this embodiment therefore has encoding capability.
If the controller does not have encoding capability, the image collector is required to collect encoded image data; in this case, the image collector collects encoded image data and outputs it to the controller. The controller can then directly encapsulate the data received from the image collector, without encoding it, and send it to the receiving-end device. After receiving the image data, the receiving-end device plays the image corresponding to the image data on its player.
If the controller has encoding capability, image data already encoded by the image collector is not required; in this case, the image collector collects non-encoded image data and outputs it to the controller. Since the image data received from the image collector has not been encoded, the controller needs to encode the image data before sending it to the receiving-end device.
The display device provided by this embodiment includes an image collector and a controller; the image collector is configured to collect non-encoded image data or encoded image data, where the non-encoded image data is image data that has not been encoded and the encoded image data is image data that has been encoded. If the controller does not have encoding capability, it receives the encoded image data output by the image collector, encapsulates it, and then sends it to the receiving-end device, so that the receiving-end device plays the image corresponding to the encoded image data. If the controller has encoding capability, it receives the non-encoded image data output by the image collector, encodes it, and then sends it to the receiving-end device, so that the image corresponding to the non-encoded image data is played on the receiving-end device. Thus, even if the internal chip of the display device lacks encoding capability, the capability of the external image collector can be fully utilized without relying on the CPU for encoding, which avoids excessive CPU resource occupancy. The encoding quality can also meet the data processing requirement, so that obvious video frame loss and stuttering during data transmission are avoided, improving the user experience.
Illustratively, display device X and display device Y are in a video conference, where both display device X and display device Y may serve as data transmitting end and data receiving end at the same time. In this example, display device X serves as the transmitting end and display device Y as the receiving end. Display device X includes a camera and a controller, and display device Y includes a player and a controller.
If the controller of display device X has encoding capability, the camera is not required to collect encoded image data. In this case, the camera directly collects non-encoded image data and outputs it to the controller. The controller encodes the non-encoded image data (including the encapsulation process) and sends it to display device Y.
If the controller of display device X does not have encoding capability, the camera is required to collect encoded image data. After the controller receives the encoded image data, it no longer needs to encode the image data; the encoded image data is directly encapsulated and then sent to display device Y.
The camera of display device X may be able to collect both encoded image data and non-encoded image data. In that case, the capability information of the controller may be obtained first, and whether the controller has encoding capability is determined from that capability information.
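The selection logic can be sketched as follows (the type and function names are illustrative assumptions; the real capability query depends on the platform):

    // Hypothetical capability descriptions.
    struct ControllerCaps { bool has_encoder; };
    struct CameraCaps    { bool outputs_encoded; };

    enum class CaptureMode { Encoded, Raw };

    // Decide which kind of data the camera should be asked to output.
    CaptureMode SelectCaptureMode(const ControllerCaps& ctrl, const CameraCaps& cam) {
        if (!ctrl.has_encoder && cam.outputs_encoded) {
            // The controller cannot encode: let the camera deliver frames it has
            // already encoded; the controller only encapsulates and sends them.
            return CaptureMode::Encoded;
        }
        // The controller can encode: take raw (non-encoded) frames and encode locally.
        return CaptureMode::Raw;
    }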
Taking the H264 encoding format as an example, the specific implementation process of this embodiment is shown in fig. 6 and specifically includes the following.
the display device of the present embodiment is installed with a browser supporting WebRTC technology. The browser can be an IE browser, a fire fox browser, a Chrome browser, a Safari browser and other mainstream browsers. The present embodiment is illustrated using a chromoum browser as an example.
WebRTC is introduced as a third-party library of Chromium and placed in the Render process; the Browser process of native Chromium re-encapsulates the WebRTC capture framework to complete the enumeration of devices such as an external camera and microphone, as well as functions such as acquiring device information, collecting data, and transmitting it. In this embodiment, the browser negotiates with the WebRTC signaling server to determine that a media transmission channel is established between the browser and the image collector, and receives the video stream from the image collector.
The browser may negotiate with the WebRTC signaling server in advance to determine the media transmission channel to build between the browser and the image collector; this may be a long-connection channel. A media transmission channel can then be constructed between the browser and the image collector, and the getUserMedia method of WebRTC is used to obtain the video stream, so that the video stream from the image collector can be received through the constructed media transmission channel; the received video stream may be a real-time video stream.
In the capture framework, the capture (reading and invoking) of video data is completed by the v4l2 API on the Linux platform. Unlike WebSocket (a network protocol based on TCP), which opens communication between a browser and a WebSocket server, WebRTC establishes a channel between browser and browser (peer-to-peer) through a series of signaling, and this channel can send any data without going through a server. WebRTC invokes the device's camera and microphone by implementing MediaStream, so that audio and video stream data can be transferred between browsers.
After the image collector captures a video image, it encodes the video image data to obtain encoded video frame data, which WebRTC then obtains. The controller outputs the encoded data to the encoding module inside WebRTC. The encoding module performs no encoding work; it is used to extract information from the encoded data and fill that information into the sending unit according to a preset format for subsequent encapsulation and transmission. After WebRTC obtains the encoded video frame data, it extracts the video frame information from the data and encapsulates it to obtain a video file, which is sent to the receiving end and played on the receiving end's player. Here, the receiving end may be a display device or another terminal device with a player.
Specifically, after WebRTC obtains the encoded video frame data, the H264 data is filled into the Y plane (Y component), the data length information is formatted as a string and filled into the U plane, and the V plane is reserved as nullptr (the C++ null pointer keyword, used to initialize pointer objects); the original shared memory channel scheme is kept unchanged. In Linux, each process has its own process control block (PCB) and address space, with a corresponding page table responsible for mapping the process's virtual addresses to physical addresses; this is managed by the memory management unit (MMU). Two different virtual addresses are mapped through their page tables to the same region of physical space, and the region they point to is the shared memory.
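A sketch of this plane-packing trick (the structure and the "H264:" length marker are illustrative assumptions; the real video_frame lives in the Chromium/WebRTC capture pipeline):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical stand-in for the video_frame shared-memory planes.
    struct VideoFramePlanes {
        std::vector<uint8_t> y;   // carries the raw H264 bitstream
        std::vector<uint8_t> u;   // carries the bitstream length, as a string
        uint8_t* v = nullptr;     // reserved: left as nullptr
    };

    VideoFramePlanes PackH264(const uint8_t* h264, size_t len) {
        VideoFramePlanes f;
        // Fill the Y plane with the encoded H264 data.
        f.y.assign(h264, h264 + len);
        // Format the length as a string and place it in the U plane so that
        // downstream modules can tell this frame apart from ordinary YUV data.
        char buf[32];
        int n = std::snprintf(buf, sizeof(buf), "H264:%zu", len);
        f.u.assign(buf, buf + n);
        return f;
    }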
The video_frame shared memory is collected by the browser and then transmitted to the Render process in the original manner; at this point it is divided into two processing paths, one of which is delivered to the encoding module in WebRTC.
In the WebRTC encoding module, the VideoStreamEncoder class is responsible for the callback of data from the camera module. The camera data is received in the OnFrame function interface callback. Here, a fack_h264_encoder_imp class is added; this class no longer does any encoding work (if the callback data is not H264-encoded data, the class still performs encoding). This class mainly implements the following new functions:
In the OnFrame callback function, the frame is analyzed to judge whether the buffer information in the U plane contains H264 encoded-length information. If so, the length information is extracted. Once the encoded data is confirmed, the buffered encoded data in the Y plane of the video_frame is taken out and each frame is parsed to determine the unit type of each frame, the slice count and size of each frame's data, and the starting byte positions. Finally, the information extracted above is filled into the sending unit according to the RTP transmission format, packaged, and sent.
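The pass-through check at the start of that callback might look as follows (a sketch under the same "H264:" marker assumption as above; the names are illustrative):

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    // If the U plane carries an H264 length marker, the Y plane already holds
    // encoded data and no further encoding is performed; otherwise the frame
    // is ordinary YUV and falls back to the real encoder.
    bool TryExtractH264(const std::vector<uint8_t>& u_plane,
                        const std::vector<uint8_t>& y_plane,
                        std::vector<uint8_t>* out) {
        size_t len = 0;
        std::string marker(u_plane.begin(), u_plane.end());
        if (std::sscanf(marker.c_str(), "H264:%zu", &len) != 1 || len > y_plane.size())
            return false;  // not pre-encoded: encode normally
        out->assign(y_plane.begin(), y_plane.begin() + len);
        return true;       // hand the bitstream straight to RTP packetization
    }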
In some embodiments, within the Chromium capture framework, v4l2_capture_delegate is used to declare a delegate. A delegate is a reference type; any method with matching parameter types and return type can be used to create an instance of such a delegate class, and the method can then be invoked through the delegate instance. Delegates are used to deliver events: when an event happens to A, A hopes that B knows about it and reacts to it in some way within its own class. When A needs to perform an event, A does not itself determine whether the event can be executed; it expects B to respond through the extended methods in the subclass. Encoded data types, such as H264, H265, VP8, and VP9, are added here.
In some embodiments, the priority ordering of the data types captured by the image collector may be changed. The capability of the video capture device is obtained first, to see whether it supports direct output of encoded data. If so, the formats are sorted by encoding type; for example, if an external camera supports the four capture modes H265, H264, MJPEG, and YUV, the priority of the capture data types is set as H265 > H264 > MJPEG > YUV.
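A sketch of such a priority sort (the enum and helper are illustrative assumptions):

    #include <algorithm>
    #include <vector>

    // Prefer formats the camera has already encoded, falling back to raw
    // capture: H265 > H264 > MJPEG > YUV (smaller value = higher priority).
    enum class CaptureFormat { H265 = 0, H264 = 1, MJPEG = 2, YUV = 3 };

    void SortByPriority(std::vector<CaptureFormat>& supported) {
        std::sort(supported.begin(), supported.end(),
                  [](CaptureFormat a, CaptureFormat b) {
                      return static_cast<int>(a) < static_cast<int>(b);
                  });
    }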
In some embodiments, the image corresponding to the encoded image data, or the image corresponding to the non-encoded image data, may also be played on the local player. That is, if the image collector collects encoded image data, the local device directly decodes the encoded image data and then plays the corresponding image on the local player. If the image collector collects non-encoded data, the local device encodes and decodes the non-encoded data and then plays the corresponding image on the local player.
The specific implementation process of playing the image corresponding to the encoded image data is as follows. The other path of data obtained from the shared memory channel is transmitted to the Chromium Render preview playback module, which mainly completes the preview function for the collected data. The webmediaplayer_ms interface class is the interface that implements the data preview; the transmitted encoded data is received in OnVideoFrame. The main newly added module is responsible for decoding and playing the camera's encoded data and mainly implements the following new function: a judgment of the video_frame content is added in OnVideoFrame, and if the data content in the U plane contains an H264 encoded-length string, the H264 data in the Y plane of the received video_frame is extracted and injected into the ffmpeg decoding module for decoding. The decoded YUV is refilled into the video_frame shared memory, and the YUV data is rendered directly during preview.
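The decode step on this preview path might be sketched with FFmpeg's decoding API as follows (illustrative; the caller is assumed to have opened an H264 AVCodecContext and allocated the AVFrame):

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Feed one extracted H264 buffer to the software decoder; on success,
    // 'out' holds the decoded YUV frame, ready to be written back into the
    // video_frame shared memory for rendering.
    bool DecodeH264(AVCodecContext* ctx, const uint8_t* data, int size, AVFrame* out) {
        AVPacket* pkt = av_packet_alloc();
        pkt->data = const_cast<uint8_t*>(data);
        pkt->size = size;
        bool ok = avcodec_send_packet(ctx, pkt) == 0 &&
                  avcodec_receive_frame(ctx, out) == 0;
        av_packet_free(&pkt);
        return ok;
    }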
The format of the captured image data in the present application is not limited to H264 and can include all encoding formats on the market, although the corresponding data processing end then needs to handle the corresponding encoding format. In addition, the H264 data can be fed into a hardware decoding module for hardware decoding, which can further reduce resource consumption.
In this embodiment, the WebRTC-based Web application performs signaling parsing through the Web server. Each browser then transmits the locally captured audio and video data to the peer browser. Finally, each browser processes and displays the local and peer multimedia data.
The application also provides an optional application scenario, namely the signaling interaction processing flow between the terminals that exchange data in this embodiment. If terminal devices such as mobile phones, computers, or network-side audio/video sources need to exchange data with the display device, they first need to communicate with the WebRTC server integrated on the Web application running platform in the display device: they register their own identity attribute information with the WebRTC server, telling the display device which terminal devices want to send audio and video data. After the display device receives the corresponding request, it can establish a WebRTC data connection channel with the terminal devices such as mobile phones and computers, and the subsequent audio and video data streams can be transmitted through this WebRTC data connection channel to the display device platform for display.
It should be noted that the server design includes a signaling server and a Web server. The Web server adopts the https service, with the aim of ensuring the security of information transmission to the greatest extent. WebRTC implements browser-to-browser communication, but the signaling exchanged between browsers to establish the communication must pass through a server during the communication process. The signaling server needs to be built according to the system requirements; the mainstream WebSocket communication protocol, combined with SimpleWebRTC, is selected to realize the whole signaling transmission process.
In the signaling message processing flow of the connection establishment process, signaling is necessary because data transfer cannot be performed between browsers before the connection between the peers is established; the data must therefore be relayed through a corresponding server, after which a point-to-point connection between the browsers is established. The connection process is mainly completed by establishing a communication channel between client and server through WebSocket and transmitting two signaling messages, an Offer and an Answer. The signaling server mainly completes the scheduling and control functions, and the communication between the browsers is realized through a series of signaling.
Before a connection is established, there is obviously no way for users to transfer data between browsers, so the data must be relayed through the server, after which a point-to-point connection between the browsers is established; the WebRTC API does not implement the above functions itself and must rely on the signaling server. The signaling mainly covers the following aspects: 1) connection control messages, including controlling the opening and closing of communication; 2) media stream metadata messages, including data such as bandwidth and media type; 3) network data, including the IP address and port; 4) alert messages when an error occurs. Compared with the signaling server, the Web server's main job is to complete the interaction between the browser user and the Web; in effect, it pushes HTML pages to the user side.
Based on the same inventive concept, the present application also provides a data processing apparatus for web page real-time communication, applied to a display device. As shown in fig. 6, the apparatus in this embodiment includes:
the data encapsulation module is used for executing: when the controller does not have coding capability, receiving the coded image data output by the image collector, packaging the coded image data, and sending the packaged coded image data to the receiving end equipment so as to enable the image corresponding to the coded image data to be played on the receiving end equipment, wherein the coded image data is image data that has been encoded;
a data encoding module for performing: when the controller has coding capability, receiving the non-coded image data output by the image collector, encoding it, and transmitting it to the receiving end equipment so as to enable the receiving end equipment to play the image corresponding to the non-coded image data, wherein the non-coded image data is image data that has not been encoded.
In some embodiments, the apparatus further comprises: a preview playing module, configured to perform: and playing the image corresponding to the coded image data on a player of the display device or playing the image corresponding to the non-coded image data on the player.
The application may be a hardware module or a software program algorithm module. By arranging an encoding module in the camera, the encoding of the image data captured by the camera can be realized, while encoding parameters fed back by the virtual encoder of terminal device A can be received at the same time to control the encoding process. After video acquisition, the acquired RGB image information is converted into YUV image frames in which all color components have the same resolution, that is, the YUV-format image frame is split with the same number of pixels in each component. A Y-component image frame, a U-component image frame and a V-component image frame are obtained respectively, and the V-component image frame is further split into 4 subframes. Combining the Y-component image frame with two of the V-component subframes creates an image frame in a first YUV420 format, and combining the U-component image frame with the other two V-component subframes creates an image frame in a second YUV420 format. H264 encoding is then performed on the two YUV420-format image frames respectively, obtaining two paths of H264 code stream data.
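As an illustration of the component split described above, the following TypeScript sketch builds the two YUV420-format frames from one full-resolution YUV frame; it is a minimal sketch under an assumed planar buffer layout with even width and height, and all type and function names are hypothetical:

// Sketch of splitting one full-resolution YUV frame into two YUV420 frames.
// Each plane of the input is width*height bytes (all components at equal resolution).
interface Yuv444 { width: number; height: number; y: Uint8Array; u: Uint8Array; v: Uint8Array; }
// A YUV420 frame: full-resolution luma plus two quarter-resolution chroma planes.
interface Yuv420 { width: number; height: number; luma: Uint8Array; cb: Uint8Array; cr: Uint8Array; }

// Split the V plane into four quarter-resolution subframes by 2x2 sampling phase.
function splitVPlane(v: Uint8Array, w: number, h: number): Uint8Array[] {
  const sub = [0, 1, 2, 3].map(() => new Uint8Array((w / 2) * (h / 2)));
  for (let row = 0; row < h; row++) {
    for (let col = 0; col < w; col++) {
      const phase = (row % 2) * 2 + (col % 2);        // which of the 4 subframes
      const idx = (row >> 1) * (w / 2) + (col >> 1);  // position inside that subframe
      sub[phase][idx] = v[row * w + col];
    }
  }
  return sub;
}

// First frame: Y plane + two V subframes; second frame: U plane + the other two.
function splitToTwoYuv420(frame: Yuv444): [Yuv420, Yuv420] {
  const { width: w, height: h } = frame;
  const [v0, v1, v2, v3] = splitVPlane(frame.v, w, h);
  const first: Yuv420 = { width: w, height: h, luma: frame.y, cb: v0, cr: v1 };
  const second: Yuv420 = { width: w, height: h, luma: frame.u, cb: v2, cr: v3 };
  return [first, second]; // each path is then H264-encoded independently
}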
The two paths of H264 code stream data are decoded respectively by the H264 algorithm to obtain two paths of YUV-format image frames. The Y-component frame of the first YUV path is extracted and filled into the Y-component position of the output YUV image frame, the Y-component frame of the second path is extracted and filled into the U-component position, and the UV-component frames of both paths are extracted and filled into the V-component position. The two paths of YUV image frames are thus combined into one path of YUV image frame for display, and channel information identification is performed on the YUV video frames so that different encoders can encode their corresponding channels. The channel information of the two separated images is identified mainly by adding the corresponding frame numbers and image data information.
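The fusion on the decoding side can be sketched symmetrically, reusing the Yuv444 and Yuv420 interfaces declared in the previous sketch; again, this only illustrates the filling order described above and is not the patented implementation:

// Reassemble the V plane from the four quarter-resolution subframes.
function mergeVPlane(sub: Uint8Array[], w: number, h: number): Uint8Array {
  const v = new Uint8Array(w * h);
  for (let row = 0; row < h; row++) {
    for (let col = 0; col < w; col++) {
      const phase = (row % 2) * 2 + (col % 2);
      const idx = (row >> 1) * (w / 2) + (col >> 1);
      v[row * w + col] = sub[phase][idx];
    }
  }
  return v;
}

// Fuse the two decoded paths back into one frame following the filling order above.
function fuseTwoYuv420(first: Yuv420, second: Yuv420): Yuv444 {
  const { width: w, height: h } = first;
  return {
    width: w,
    height: h,
    y: first.luma,  // Y of the first path -> Y position
    u: second.luma, // Y of the second path -> U position
    v: mergeVPlane([first.cb, first.cr, second.cb, second.cr], w, h), // UV of both -> V position
  };
}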
The coded image data are stored on a disk; during playback the data are taken out, decoded and fused according to the channel frame number identification, the YUV image frames decoded by the two channels are fused into one YUV image frame, and the image information is displayed on the screen. Multiple encoders compress the separated image information simultaneously, satisfying the requirement for picture detail without affecting compression efficiency and effect. The image frame data carrying the different channel information are transmitted, according to the identified image information, to the corresponding encoding processor for encoding.
The application also provides a Web application running platform, which is pre-built in the display device. The display device integrates an HTML5 (a language description mode for building Web content) application based on this Web application running platform. The HTML5 application is a Web application based on the HTML5 standard; it is displayed on the desktop of the display device as an icon and can be started and run in the same way as a local application.
The Web application running platform refers to a platform or running environment capable of supporting the running of HTML5 applications. It is the environment in which an HTML5 application is displayed and executed on the terminal; it is not limited to the Crosswalk architecture and may also be an operating system HTML5 running environment, a browser engine, and the like. The browser kernel is the core of the Web application running platform. The Web application running platform realized in this scheme uses the chromium kernel to support rendering and operation of the application, and adds application packaging and management functions, so that an HTML5 application can run in the display device platform just like a native application of that platform.
The Web application running platform module comprises, besides the core WebRTC server unit, a Web application management unit and a chromium kernel management unit;
the Web application management unit is responsible for providing functions such as packaging and management of applications, so that an HTML5 application can run in the display device platform like a native application and can be started in the same way as a local application; it is also responsible for the extension of device APIs and the management of the application lifecycle, and for extending and integrating the WebRTC server unit;
the chromium kernel management unit uses the chromium kernel to load, render and display the HTML5 application in the display device platform; this module can be updated in time as the chromium kernel is upgraded, so as to provide better operation and performance support for the HTML5 application; by packaging the chromium kernel into the application in the form of a dynamic library, the module breaks away from the limitation of the platform's browser kernel version and removes the previous constraint that HTML5 applications had to adapt to different browser kernel versions on different platforms.
WebSocket is used as an HTML5 protocol to realize bidirectional communication between the browser and the server. With the WebSocket API, the browser and the server only need to perform one handshake, after which a fast channel is formed between them and the two sides can transmit data to each other directly.
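As a browser-side illustration only, opening and using such a channel might look as follows in TypeScript; the endpoint URL and message shape are assumptions:

// Browser-side sketch of the WebSocket channel (endpoint URL is an assumption).
const channel = new WebSocket("wss://signaling.example.com/webrtc");

channel.onopen = () => {
  // After the single upgrade handshake, either side may push data at any time.
  channel.send(JSON.stringify({ kind: "control", action: "open" }));
};

channel.onmessage = (event: MessageEvent) => {
  // Messages from the server arrive without polling; parse and dispatch.
  const msg = JSON.parse(event.data as string);
  console.log("signal received:", msg);
};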
The embodiment of the application provides a data processing method for web page real-time communication, as shown in the flow chart of fig. 7. The method is applied to a controller of a display device, the display device further comprising an image collector, wherein the non-coded image data is image data that has not been encoded and the coded image data is image data that has been encoded. The method comprises the following steps:
step one, if the controller does not have coding capability, receiving the coded image data output by the image collector, packaging the coded image data, and sending the packaged coded image data to receiving end equipment so as to enable the receiving end equipment to play an image corresponding to the coded image data.
Step two: if the controller has coding capability, receiving the non-coded image data output by the image collector, encoding it, and transmitting it to the receiving end equipment so as to enable the image corresponding to the non-coded image data to be played on the receiving end equipment.
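The branch between step one and step two can be summarized by the following TypeScript sketch, in which every name (Camera, processFrame, and so on) is hypothetical and merely stands in for the controller logic described above:

// Hypothetical sketch of the capability branch in steps one and two.
interface Camera {
  captureEncoded(): Uint8Array; // an already-encoded bitstream from the image collector
  captureRaw(): Uint8Array;     // a raw (non-coded) frame
}

function processFrame(
  controllerCanEncode: boolean,
  camera: Camera,
  encode: (raw: Uint8Array) => Uint8Array,       // controller-side encoder
  encapsulate: (data: Uint8Array) => Uint8Array, // packaging for transmission
  sendToReceiver: (packet: Uint8Array) => void,
): void {
  if (!controllerCanEncode) {
    // Step one: the image collector has already encoded; only package and forward.
    sendToReceiver(encapsulate(camera.captureEncoded()));
  } else {
    // Step two: encode the raw frame on the controller, then send it.
    sendToReceiver(encode(camera.captureRaw()));
  }
}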
The same or similar content in the embodiments of the present application may be referred to mutually, and such related embodiments are not described again in detail.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
an image collector for collecting non-coded image data or collecting coded image data, wherein the non-coded image data is image data that has not been encoded, and the coded image data is image data that has been encoded;
A controller for performing:
when the controller does not have coding capability, receiving the coded image data output by the image collector, packaging the coded image data, and sending the packaged coded image data to receiving end equipment so as to enable an image corresponding to the coded image data to be played on the receiving end equipment;
and when the controller has coding capability, receiving the non-coded image data output by the image collector, encoding the non-coded image data, and then sending the encoded data to the receiving end equipment so as to enable the image corresponding to the non-coded image data to be played on the receiving end equipment.
2. The display device of claim 1, further comprising a player, the controller further configured to perform:
and playing the image corresponding to the coded image data on the player, or playing the image corresponding to the non-coded image data on the player.
3. The display device of claim 1, wherein the controller is further configured to, prior to receiving the data output by the image collector, perform:
within the chromium acquisition framework, adding an encoding-class data type to the v4l2_capture_delegate class.
4. A display device according to claim 3, wherein the controller is further configured to perform: and receiving the data output by the image collector according to the collection mode priority, wherein the collection mode priority prescribes the data type priority of the data output by the image collector.
5. The display device according to claim 1, wherein the specific steps of encapsulating the coded image data and sending it to the receiving end equipment are:
filling the coded image data into a Y component, formatting the data length information as a string and filling it into a U component, and reserving a V component as nullptr, to obtain shared memory data;
and extracting the filling information from the shared memory data and filling it into a sending module to obtain the encapsulated coded image data, wherein the filling information at least comprises the length information, the unit type of the video frame, the number and size of the slices of the video frame, and the starting byte position of the video frame.
6. The display device according to claim 5, wherein the specific step of filling the filling information into the sending module is: filling the filling information into the sending module according to the RTP sending format.
7. A data processing apparatus for real-time communication of web pages, the apparatus being applied to a display device, the apparatus comprising:
the data encapsulation module is used for executing: when the controller does not have coding capability, receiving the coded image data output by the image collector, packaging the coded image data, and sending the packaged coded image data to the receiving end equipment so as to enable the image corresponding to the coded image data to be played on the receiving end equipment, wherein the coded image data is image data that has been encoded;
a data encoding module for performing: when the controller has coding capability, receiving the non-coded image data output by the image collector, encoding it, and transmitting it to the receiving end equipment so as to enable the receiving end equipment to play the image corresponding to the non-coded image data, wherein the non-coded image data is image data that has not been encoded.
8. The data processing apparatus for real-time communication of web pages as recited in claim 7, wherein the apparatus further comprises:
a preview playing module, configured to perform: and playing the image corresponding to the coded image data on a player of the display device or playing the image corresponding to the non-coded image data on the player.
9. A data processing method for real-time communication of web pages, the method being applied to a controller of a display device, the display device further comprising an image collector, characterized in that the image collector is configured to collect non-coded image data or collect coded image data, wherein the non-coded image data is image data that has not been encoded and the coded image data is image data that has been encoded, the method comprising:
when the controller does not have coding capability, receiving the coded image data output by the image collector, packaging the coded image data, and sending the packaged coded image data to receiving end equipment so as to enable an image corresponding to the coded image data to be played on the receiving end equipment;
and when the controller has coding capability, receiving the non-coded image data output by the image collector, encoding the non-coded image data, and then sending the encoded data to the receiving end equipment so as to enable the image corresponding to the non-coded image data to be played on the receiving end equipment.
10. The data processing method for real-time communication of a web page according to claim 9, wherein the display device further comprises a player, the method further comprising: and playing the image corresponding to the coded image data on the player, or playing the image corresponding to the non-coded image data on the player.
CN202210219573.8A 2022-03-08 2022-03-08 Data processing method and device for web page real-time communication and display equipment Pending CN116781918A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210219573.8A CN116781918A (en) 2022-03-08 2022-03-08 Data processing method and device for web page real-time communication and display equipment

Publications (1)

Publication Number Publication Date
CN116781918A 2023-09-19

Family

ID=88008600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210219573.8A Pending CN116781918A (en) 2022-03-08 2022-03-08 Data processing method and device for web page real-time communication and display equipment

Country Status (1)

Country Link
CN (1) CN116781918A (en)

Similar Documents

Publication Publication Date Title
US11120677B2 (en) Transcoding mixing and distribution system and method for a video security system
US9264478B2 (en) Home cloud with virtualized input and output roaming over network
US8170123B1 (en) Media acceleration for virtual computing services
CN114302190B (en) Display equipment and image quality adjusting method
US20130254417A1 (en) System method device for streaming video
US9860285B2 (en) System, apparatus, and method for sharing a screen having multiple visual components
WO2022257699A1 (en) Image picture display method and apparatus, device, storage medium and program product
CN113507638B (en) Display equipment and screen projection method
CN112667184A (en) Display device
CN103605535A (en) Operation method and system of intelligent display device, intelligent display device and mobile device
CN113535063A (en) Live broadcast page switching method, video page switching method, electronic device and storage medium
US9729931B2 (en) System for managing detection of advertisements in an electronic device, for example in a digital TV decoder
CN113453069B (en) Display device and thumbnail generation method
CN116781918A (en) Data processing method and device for web page real-time communication and display equipment
CN115278323A (en) Display device, intelligent device and data processing method
CN116980554A (en) Display equipment and video conference interface display method
EP3229478B1 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
CN114938408A (en) Data transmission method, system, equipment and medium of cloud mobile phone
CN113691858A (en) Display device and interface display method
WO2016107174A1 (en) Method and system for processing multimedia file data, player and client
CN114302203A (en) Image display method and display device
CN113038221B (en) Double-channel video playing method and display equipment
CN113099308B (en) Content display method, display equipment and image collector
CN115174991B (en) Display equipment and video playing method
CN114630101B (en) Display device, VR device and display control method of virtual reality application content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination