CN116546235A - Display equipment and inter-process communication method - Google Patents


Info

Publication number
CN116546235A
CN116546235A (application number CN202210093739.6A)
Authority
CN
China
Prior art keywords
client
media data
data
server
shared memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210093739.6A
Other languages
Chinese (zh)
Inventor
朱瑞亮
任飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Electronic Technology Wuhan Co ltd
Original Assignee
Hisense Electronic Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Electronic Technology Wuhan Co ltd filed Critical Hisense Electronic Technology Wuhan Co ltd
Priority to CN202210093739.6A priority Critical patent/CN116546235A/en
Publication of CN116546235A publication Critical patent/CN116546235A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a display device and an inter-process communication method. When a client and a server, which belong respectively to a first process and a second process in the display device, perform data transmission, the server first stores the media data in an anonymous memory rather than transmitting the large volume of media data directly to the client. Using a Binder communication mechanism, the server transmits address information to the client through a shared memory, so that the address information indicates to the client where the media data is stored in the anonymous memory. The client can then acquire the data to be displayed directly from the anonymous memory according to the address information. This inter-process communication method avoids the direct transmission of large media data between the client and the server, saving data transmission time and effectively reducing the transmission delay of the media data.

Description

Display equipment and inter-process communication method
Technical Field
The application relates to the technical field of intelligent display equipment, in particular to display equipment and an inter-process communication method.
Background
The display device refers to a terminal device capable of outputting a specific display picture, and may be a terminal device such as a smart television, a mobile terminal, a smart advertisement screen, or a projector. Taking the smart television as an example: it is based on Internet application technology, has an open operating system and chip as well as an open application platform, can realize a bidirectional human-machine interaction function, and is a television product integrating multiple functions such as video, entertainment, and data, so as to meet the diversified and personalized needs of users.
The media functions of the display device are implemented by corresponding clients (Client) that provide processing functions for the media data, such as identifying the media data, handling business logic, and integrating with other functional modules to construct richer presentation scenarios. The server (Server) performs encoding and decoding processing on the acquired audio and video data to obtain media data, and transmits the media data to the corresponding clients for use. Typically, the client and the server belong to different processes; thus, data transfer between the client and the server is actually communication between different processes, i.e., inter-process communication (Inter-Process Communication, IPC).
The existing communication between the client and the server performs data transmission based on a shared memory (Shared Memory) between the two processes: the server caches media data into the shared memory, and the client then extracts the cached media data from the shared memory, thereby completing the transmission of the media data from the server to the client. However, this method of inter-process communication involves multiple data-copying steps, and the amount of data transmitted between processes is large, which produces an obvious data transmission delay and in turn delays the display of the data to be displayed, degrading the user experience.
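The double-copy pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not code from the patent; a POSIX-style shared-memory region stands in for the kernel shared memory between the two processes:

```python
from multiprocessing import shared_memory

media = bytes(range(256)) * 4  # stand-in for a 1 KiB media frame

# Copy 1: the "server" writes the full media payload into the shared region.
shm = shared_memory.SharedMemory(create=True, size=len(media))
shm.buf[:len(media)] = media

# Copy 2: the "client" reads the full payload back out of the shared region.
received = bytes(shm.buf[:len(media)])

shm.close()
shm.unlink()
```

The payload crosses the process boundary, but only at the cost of two full copies of the media data — the overhead the method of this application is designed to avoid.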
Disclosure of Invention
The application provides a display device and an inter-process communication method that are based on a Binder communication mechanism and on transmitting data between different processes through an anonymous memory, so as to effectively improve the efficiency of inter-process communication.
In a first aspect, the present application provides a display device, comprising:
a display configured to display media data corresponding to a client, the media data provided by a server, wherein the client is running in a first process and the server is running in a second process;
a controller configured to:
receiving a data acquisition request sent by the client based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
responding to the data acquisition request, processing audio and video data corresponding to the client to obtain the media data;
and storing the media data in an anonymous memory, and sending address information to the client based on the shared memory so that the client can acquire the media data from the anonymous memory according to the address information.
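As a rough illustration of the controller flow above, the following hypothetical Python sketch uses a named shared-memory region to stand in for Android's anonymous memory; the shape of the address information (region name, offset, length) is an assumption for the example, not taken from the patent:

```python
from multiprocessing import shared_memory

def server_store_frame(frame: bytes):
    """Server step: cache the frame in an 'anonymous' region and return only
    its address information; only addr_info travels over the control channel."""
    region = shared_memory.SharedMemory(create=True, size=len(frame))
    region.buf[:len(frame)] = frame
    addr_info = {"region": region.name, "offset": 0, "length": len(frame)}
    return region, addr_info

def client_fetch_frame(addr_info):
    """Client step: map the region named in addr_info and read the media data
    directly, without the payload being copied through the channel."""
    region = shared_memory.SharedMemory(name=addr_info["region"])
    start, end = addr_info["offset"], addr_info["offset"] + addr_info["length"]
    data = bytes(region.buf[start:end])
    region.close()
    return data

frame = b"\x00\x01" * 512  # stand-in for one decoded media frame
region, info = server_store_frame(frame)
fetched = client_fetch_frame(info)
region.close()
region.unlink()
```

Only the small `addr_info` record is exchanged between the processes; the bulk media data is written once and read in place.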
In one implementation, before the controller receives the data acquisition request sent by the client based on the shared memory, the controller is further configured to:
Receiving a registration request sent by the client, wherein the registration request comprises the name of the client;
and responding to the registration request, adding the mapping relation between the name of the client and the client reference information to a mapping relation table, wherein the mapping relation table consists of a first mapping relation between the name of each client and its client reference information and a second mapping relation between the name of each data processing method and its method reference information, and the method reference information is used to guide to the corresponding data processing method.
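The two mapping relations described above can be modelled as plain dictionaries held by the server. This is a hypothetical sketch; the names and reference values are illustrative, not from the patent:

```python
# Mapping relation table: a first mapping (client name -> client reference
# information) and a second mapping (method name -> method reference information).
mapping_table = {
    "clients": {},
    "methods": {},
}

def handle_register_request(client_name, client_ref):
    """Server-side handling of a registration request carrying the client's name."""
    mapping_table["clients"][client_name] = client_ref

def register_method(method_name, method_ref):
    """The server's own data processing methods are entered the same way."""
    mapping_table["methods"][method_name] = method_ref

handle_register_request("Camera", "mCameraClientRef")  # stand-in reference info
register_method("decode", lambda av: av)               # stand-in processing method
```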
In one implementation, the data acquisition request includes the name of a target data processing method, and the controller, when processing the audio and video data corresponding to the client in response to the data acquisition request to obtain the media data, is configured to:
determining target method reference information corresponding to the name of the target data processing method based on the mapping relation table;
acquiring a corresponding target data processing method according to the target method reference information;
and processing the audio and video data corresponding to the client by using the target data processing method to obtain the media data.
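The name-based dispatch in the three steps above can be sketched as follows, assuming a method table like the one built during registration (all names here are hypothetical stand-ins):

```python
# Stand-in method table: method name -> method reference information,
# where the reference leads directly to a callable "data processing method".
method_table = {
    "to_upper": lambda av_data: av_data.upper(),
}

def handle_data_request(request, av_data):
    # 1) determine the target method reference information from the table
    target_ref = method_table[request["method_name"]]
    # 2) + 3) acquire the target method and apply it to the audio/video data
    return target_ref(av_data)

media = handle_data_request({"method_name": "to_upper"}, b"raw frame")
```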
In one implementation, the controller, when storing the media data in the anonymous memory and sending the address information to the client based on the shared memory, is configured to:
storing the media data in an anonymous memory;
generating address information corresponding to a storage position of the media data in the anonymous memory;
and sending the address information to the client based on the shared memory.
In one implementation, the data acquisition request includes the name of the client, and the controller, when sending the address information to the client based on the shared memory, is configured to:
determining the client side reference information corresponding to the name of the client side based on the mapping relation table;
and based on the shared memory, sending the address information to the client according to the client reference information.
In a second aspect, the present application provides a display device, comprising:
a display configured to display media data corresponding to a client, the media data provided by a server, wherein the client is running in a first process and the server is running in a second process;
A controller configured to:
sending a data acquisition request to the server based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
receiving address information sent by the server based on the shared memory;
and acquiring the media data from the anonymous memory according to the address information.
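The client-side sequence above (send request, receive address information, read from anonymous memory) can be sketched as follows; a simple queue stands in for the Binder-driven shared-memory control channel, and the region-name addressing is an assumption of the example:

```python
from multiprocessing import shared_memory
import queue

channel = queue.Queue()  # stand-in for the shared-memory control channel

# Stand-in server: stores the media anonymously, answers with address info only.
payload = b"decoded-frame" * 64
region = shared_memory.SharedMemory(create=True, size=len(payload))
region.buf[:len(payload)] = payload
channel.put({"region": region.name, "length": len(payload)})

# Client: receive the address info, then read directly from the anonymous region.
addr = channel.get()
view = shared_memory.SharedMemory(name=addr["region"])
media = bytes(view.buf[:addr["length"]])
view.close()

region.close()
region.unlink()
```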
In one implementation, before the controller sends the data acquisition request to the server based on the shared memory, the controller is further configured to:
and sending a registration request to the server based on the shared memory, wherein the registration request comprises client reference information, and the client reference information and the name of the client have a mapping relation and are used for guiding to the client.
In one implementation, after the controller obtains the media data from the anonymous memory, the controller is further configured to:
storing the media data into a circular buffer, wherein the media data stored in the circular buffer are arranged according to their receiving time;
and extracting the media data with the earliest receiving time from the circular buffer, and releasing the space occupied by that media data in the circular buffer.
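The circular-buffer behaviour described above — frames kept in arrival order, the oldest extracted and its slot released — can be sketched with a deque (an illustrative model, not the patent's implementation):

```python
from collections import deque

class CircularBuffer:
    def __init__(self, capacity):
        # a bounded deque: when full, the oldest entry is evicted automatically
        self.slots = deque(maxlen=capacity)

    def store(self, recv_time, media):
        self.slots.append((recv_time, media))  # arranged by receiving time

    def extract_earliest(self):
        # returns the media with the earliest receiving time and frees its slot
        return self.slots.popleft()

buf = CircularBuffer(capacity=3)
for t, frame in [(1, b"f1"), (2, b"f2"), (3, b"f3")]:
    buf.store(t, frame)
earliest = buf.extract_earliest()
```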
In a third aspect, the present application provides an inter-process communication method applied to a server in a display device, where the display device is configured to display media data corresponding to a client, the media data is provided by the server, the client runs in a first process, and the server runs in a second process, the method comprising:
receiving a data acquisition request sent by the client based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
responding to the data acquisition request, processing audio and video data corresponding to the client to obtain the media data;
and storing the media data in an anonymous memory, and sending address information to the client based on the shared memory so that the client can acquire the media data from the anonymous memory according to the address information.
In a fourth aspect, the present application provides an inter-process communication method applied to a client in a display device, where the display device is configured to display media data corresponding to the client, the media data is provided by a server, the client runs in a first process, and the server runs in a second process, the method comprising:
Sending a data acquisition request to the server based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
receiving address information sent by the server based on the shared memory;
and acquiring the media data from the anonymous memory according to the address information.
According to the technical scheme, when the client and the server, which belong respectively to the first process and the second process in the display device, perform data transmission, the server first stores the media data in the anonymous memory rather than transmitting the large volume of media data directly to the client. Using the Binder communication mechanism, the server transmits address information to the client through the shared memory, so that the address information indicates to the client where the media data is stored in the anonymous memory. The client can then acquire the data to be displayed directly from the anonymous memory according to the address information. This inter-process communication method avoids the direct transmission of large media data between the client and the server, saving data transmission time and effectively reducing the transmission delay of the media data.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a usage scenario of a display device according to an embodiment of the present application;
fig. 2 is a hardware configuration block diagram of a control device in the embodiment of the present application;
fig. 3 is a hardware configuration block diagram of a control device in the embodiment of the present application;
fig. 4 is a hardware configuration diagram of a display device in an embodiment of the present application;
FIG. 5 is a software configuration diagram of a display device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a multimedia framework according to an embodiment of the present application;
FIG. 7 is an interaction diagram of an inter-process communication method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a Binder communication mechanism model according to an embodiment of the present application;
fig. 9 is a schematic flow chart of registering client information from a client to a server in the embodiment of the present application;
fig. 10 is a schematic flow chart of processing audio and video data by a server in an embodiment of the application;
fig. 11 is a schematic flow chart of a server transmitting media data to a client in an embodiment of the present application;
fig. 12 is a schematic flow chart of a server transmitting media data to a client in the embodiment of the present application;
FIG. 13 is a flowchart illustrating a client processing media data according to an embodiment of the present application;
fig. 14 is a schematic flow chart of inter-process communication between a Camera client and a Camera server in the embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described in the examples below do not represent all embodiments consistent with the present application; they are merely examples of systems and methods consistent with some aspects of the present application, as recited in the claims.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," "second," "third," and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects or entities and not necessarily for limiting a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display apparatus 200 may communicate with the server 300 through the internet, and a user may operate the display apparatus 200 through the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, the display device 200 being controlled wirelessly or by wire. The user may control the display device 200 by inputting control instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the control apparatus 100 may also be a mobile terminal, such as a mobile phone, where the communication between the mobile terminal and the display device 200 includes at least one of internet protocol communication or bluetooth protocol communication, and other short-range communication and long-range communication modes. The user may control the display device 200 by inputting user instructions through at least one of keys, voice input, control panel input, etc. on the mobile terminal. Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100, which is exemplified by a remote controller. As shown in fig. 2, the control device 100 includes a controller, a communication interface, a user input/output interface, a memory, and a power supply.
Fig. 3 exemplarily shows a block diagram of a configuration of the control apparatus 100, which is exemplified by a mobile terminal. As shown in fig. 3, the control device 100 includes at least one of a Radio Frequency (RF) circuit, a memory, a display unit, a camera, a sensor, an audio circuit, a wireless fidelity (Wireless Fidelity, wi-Fi) circuit, a processor, a bluetooth circuit, and a power supply.
Fig. 4 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wi-Fi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 300 through the communicator 220.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Referring to FIG. 5, in some embodiments, the system is divided into four layers, from top to bottom: an application layer (referred to as the "application layer"), an application framework layer (Application Framework layer, referred to as the "framework layer"), an Android runtime (Android Runtime) and system library layer (referred to as the "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. The application framework layer corresponds to a processing center that decides to let the applications in the application layer act. Through the API interface, the application program can access the resources in the system and acquire the services of the system in the execution.
As shown in fig. 5, the application framework layer in the embodiment of the present application includes a manager (manager), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used to interact with all activities that are running in the system; a Location Manager (Location Manager) is used to provide system services or applications with access to system Location services; a Package Manager (Package Manager) for retrieving various information about an application Package currently installed on the device; a notification manager (Notification Manager) for controlling the display and clearing of notification messages; a Window Manager (Window Manager) is used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In order to expand the functions of the display apparatus 200, a variety of media functions are configured in the display apparatus 200 to satisfy different use demands of users. Each media function is implemented through a corresponding Client (Client), and the Client can process and integrate the media data accordingly to obtain the media data meeting the user requirements. The media data used by the client needs to meet the corresponding data requirements, such as data type, data format, etc. The Server is used for providing the acquisition function of the audio and video data and processing the acquired audio and video data to obtain the media data meeting the data requirements of the client. Typically, clients are separate processes from the servers. In this embodiment, the process where the client is located may be referred to as a first process, and the process where the server is located may be referred to as a second process, where the first process and the second process are different processes. Therefore, the data transmission process between the client and the server is actually the data transmission process between the first process and the second process, namely the inter-process communication IPC.
For example, referring to the multimedia framework schematic diagram shown in fig. 6, the multimedia framework includes an application layer where a Camera client is located, a framework layer where a Camera server is located, and a kernel (Kernel) layer. The Camera client runs in the first process and provides a Camera backend interface (Camera Backend Interface) to the outside. The Camera client is used to display media data, such as audio data and video data. Further, it realizes other functions by processing the media data — such as account login, payment, video call (Video Call), web real-time communication (Web Real-Time Communication), gesture manipulation, and interactive games — through algorithms such as facial recognition and gesture recognition together with the related business logic. Still further, richer scenes are extended by integrating other functional modules, such as Virtual Reality (VR) and Augmented Reality (AR) modules.
The Camera server runs in the second process and provides a calling interface to the outside. The Camera server is used to provide a data acquisition service (capture service) and a data encoding/decoding service (media codec service). Illustratively, the Advanced Linux Sound Architecture (ALSA) in the kernel layer is invoked through the data acquisition service to collect audio data, and the Linux video device driver (Video for Linux 2, V4L2) in the kernel layer is invoked through the data acquisition service to collect video data. A hardware decoder driver is invoked through the data codec service to hard-decode the acquired audio and video data, and a hardware encoder driver is invoked through the data codec service to hard-encode the acquired audio and video data. In this way, the Camera server can collect and process media data meeting the display requirements of the Camera client.
The process in which the Camera server transmits media data to the Camera client is equivalent to an inter-process communication process transmitting the media data from the second process to the first process. To implement inter-process communication, the multimedia framework includes an inter-process communication (IPC) module, which provides an externally exposed IPC client interface (IPC Client Interface) at the Camera backend interface (Backend Interface) of the Camera client and an externally exposed IPC service interface (IPC Service Interface) for the Camera server. The Camera server outputs media data from the IPC Service Interface, and the Camera client receives the media data through the IPC Client Interface.
The following describes the existing communication process between the client and the server, that is, the communication process between the first process and the second process:
the client and the server have a User Space (User Space), i.e., a User Memory (User Memory), respectively, and the server cannot directly access data stored in the User Memory of the client, and the client cannot directly access data stored in the User Memory of the server. Thus, there is different direct communication between the client and the server. A Socket (Socket) communication mechanism may be employed to implement communication between the client and the server. At this time, the interface of the first process and the interface of the second process are biz interfaces configured based on the whole network protocol, the configuration of the interfaces is complex, and the calling performance of the interfaces is poor. In addition, in the process of data transmission, a transmission intermediary is needed, that is, a Kernel Space (Kernel Space) needs to be configured in the Kernel layer, that is, a shared Memory (shared Memory) is needed, and both the client and the server can directly access the shared Memory. Taking a process that the Camera server transmits media data to the Camera client as an example for explanation, the Camera server copies the media data to the shared memory first, and the Camera client copies the cached media data from the shared memory, so that the media data is transmitted from the second process to the first process through the two copying processes. Therefore, by adopting a Socket communication mechanism to realize inter-process communication, the interface calling performance is poor, and the media data needs to be duplicated and transmitted for many times among processes, so that the transmission time is long, and obvious transmission delay can be caused. In practical applications, media data, which is directly perceived by a user, will be perceived by the user significantly once the transmission delay is above a certain threshold. 
For example, when the transmission delay of audio data reaches 100 ms, or the transmission delay of video data reaches 30 ms, the delay is clearly perceptible, which greatly degrades the user's experience.
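The double-copy cost described above can be illustrated with a minimal model, in which ordinary byte strings stand in for user memory and the kernel-side buffer; this is a sketch, not real socket code:

```python
# Illustrative model of the Socket-style path described above: media data
# crosses a kernel-side buffer and is copied twice before the client can
# use it. Plain bytes objects stand in for user memory and kernel memory.

def socket_style_transfer(server_mem: bytes):
    copies = 0
    kernel_buf = bytes(server_mem)   # copy 1: server user memory -> kernel
    copies += 1
    client_mem = bytes(kernel_buf)   # copy 2: kernel -> client user memory
    copies += 1
    return client_mem, copies

client_mem, copies = socket_style_transfer(b"media-frame")
assert client_mem == b"media-frame"
assert copies == 2   # every frame pays two full copies of its payload
```

The point of the model is only that the payload is materialized twice per transfer, which is where the transmission delay discussed above accumulates for large media frames.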
In order to improve the efficiency of inter-process communication, an embodiment of the present application provides an inter-process communication method. Reference may be made to the flowchart shown in fig. 7; the specific flow is as follows:
S701, the client sends a data acquisition request to the server based on the shared memory.
In this embodiment, the client and the server adopt a Binder communication mechanism in the data transmission process based on the shared memory. The data transmission process between the client and the server based on the Binder communication mechanism is described below:
the Binder communication mechanism model shown in fig. 8 includes three parts, namely a client, a server and a Binder Driver. The client and the server belong to a first process and a second process, and respectively provide user memory, and the Binder driver provides shared memory, and in this embodiment, the Binder communication mechanism model further includes anonymous memory (Anonymous Shared Memory, ashmem).
The display device 200 is typically configured to automatically run the second process after power-on, i.e., to start the various services in the second process in turn. A service can function properly only after it has been started. Therefore, when the client requests a service from the server, the request can succeed only after the service has been started; otherwise, the request fails. In that case, the client needs to wait for the service to finish starting and then request again.
The display device 200 provides various services, such as a Camera service, a positioning service, etc., and the client needs to accurately call the corresponding service to obtain the corresponding media data. The server corresponding to each service can register its name in the Binder driver, and the Binder driver generates service reference information corresponding to the name of the server, where the service reference information is used to guide to the corresponding server. For example, the Camera server registers its name, such as Camera, in the Binder driver, and the Binder driver generates corresponding service reference information, such as mCameraService, and establishes a mapping relationship between Camera and mCameraService. When the Camera client requests the Camera service provided by the Camera server, it sends an acquisition request to the Binder driver, the acquisition request carrying the name of the requested server, namely Camera. The Binder driver determines the mCameraService corresponding to Camera based on the mapping relationship between Camera and mCameraService and returns mCameraService to the Camera client, and mCameraService can guide the Camera client to the Camera server so as to obtain the Camera service.
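The name-based registration and lookup flow above can be sketched as follows; the BinderDriver class and its method names are illustrative stand-ins, not the actual Android Binder API:

```python
# Minimal sketch of service registration and lookup via the Binder driver.
# A dictionary stands in for the driver's name -> reference bookkeeping.

class BinderDriver:
    def __init__(self):
        self._services = {}            # service name -> (reference, server)

    def register_service(self, name, server):
        ref = f"m{name}Service"        # driver-generated service reference
        self._services[name] = (ref, server)
        return ref

    def get_service(self, name):
        # Returns the reference information that guides the client to the
        # server, or None if the service has not been started/registered.
        entry = self._services.get(name)
        return entry[0] if entry else None

driver = BinderDriver()
assert driver.get_service("Camera") is None      # request before start fails
driver.register_service("Camera", object())      # Camera server registers
assert driver.get_service("Camera") == "mCameraService"
```

The `None` result models the request-fails-and-retry case described above, where the client must wait for the service to start and request again.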
In general, one server may correspond to multiple clients, i.e., the same server may perform inter-process communication with multiple clients. Thus, the server side also provides client management functions to ensure accurate communication between each client side and the server side. Reference may be made to the flow shown in fig. 9, which is specifically as follows:
S901, the client sends a registration request to the server based on the shared memory, where the registration request includes client reference information; the client reference information has a mapping relationship with the name of the client and is used to guide to the client.
S902, the server receives a registration request sent by the client.
S903, the server responds to the registration request and adds the mapping relationship between the name of the client and the client reference information to a mapping relationship table, where the mapping relationship table is composed of a first mapping relationship between the name of each client and its client reference information, and a second mapping relationship between the name of each data processing method and its method reference information, the method reference information being used to guide to the corresponding data processing method.
Each client registers the name of the client with the server by referring to the flow shown in fig. 9, and the following description is given by taking the registration process of one of the clients as an example:
The server provides a mapping relationship table (HashMap) to manage the corresponding plurality of clients. The mapping relationship table consists of two parts, namely the first mapping relationship related to the clients and the second mapping relationship related to the data processing methods, which are explained as follows:
For the first mapping relationship:
The client sends a registration request to the server in the second process through the shared memory. First, the client sends the registration request to the Binder driver, where the registration request carries information of the client, e.g., the address of the client, which can be mapped directly to the client. The Binder driver then generates a name of the client and client reference information corresponding to the information of the client. The name of the client and the client reference information have a mapping relationship, namely the first mapping relationship, and the client reference information is used to guide to the client, i.e., the client reference information can guide to the information of the client so that the client can be directly mapped to based on that information. Illustratively, the information of the client is a media access control (Media Access Control, MAC) address MAC1, and the Binder driver generates the corresponding client name Client1 and client reference information Client_Ref1. The Binder driver establishes the first mapping relationship between Client1 and Client_Ref1, and stores the client information MAC1 in the shared memory. Then, the Binder driver carries only the first mapping relationship in the registration request and sends the registration request to the server in the second process, without sending the specific client information to the server. This reduces the amount of data transmitted between the Binder driver and the server and improves transmission efficiency.
After receiving the registration request, the server extracts the first mapping relationship from the registration request and adds it to the mapping relationship table. By repeating the above process, each client registers its information with the server, and the server obtains the first mapping relationship corresponding to each client and stores each first mapping relationship in the mapping relationship table so as to manage them.
For the second mapping relationship:
The server may provide various data processing methods for processing audio and video data, such as a pulse code modulation (Pulse Code Modulation, PCM) method, a YUV color coding method, an elementary stream (Elementary Stream, ES) coding method, and the like. Different clients use different media data based on their different media function requirements, i.e., the corresponding data processing method is needed to process the audio and video data so as to obtain media data meeting the data requirements. The server needs to register each data processing method it provides: the server sends a registration request to the Binder driver, the registration request carrying information of a data processing method provided by the server, where the information of the data processing method can be mapped directly to the data processing method in the server. The Binder driver generates a name of the data processing method and method reference information corresponding to the information of the data processing method. The name of the data processing method and the method reference information have a mapping relationship, namely the second mapping relationship, and the method reference information is used to guide to the data processing method, i.e., the method reference information can guide to the information of the data processing method so that the data processing method in the server can be directly mapped to based on that information. Illustratively, the information of the data processing method is a, and the Binder driver generates the corresponding method name handle1 and method reference information handle_1. The Binder driver establishes the second mapping relationship between handle1 and handle_1, and stores the information a of the data processing method in the shared memory. Then, the Binder driver returns only the second mapping relationship to the server.
After receiving the second mapping relationship, the server adds it to the mapping relationship table. By repeating the above process, the server registers each data processing method it provides, obtains the second mapping relationship corresponding to each data processing method, and stores each second mapping relationship in the mapping relationship table so as to manage them.
Based on the second mapping relationships in the generated mapping relationship table, the client can forward-call the data processing methods provided by the server; based on the first mapping relationships in the generated mapping relationship table, the server can call back the corresponding client to send media data to it.
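The mapping relationship table described above can be sketched as a pair of hash maps; the class and the example names (Client1, Client_Ref1, handle1, handle_1) follow the examples in the text and are otherwise illustrative:

```python
# Sketch of the server-side mapping relationship table (the HashMap
# described above), holding first mappings (client name -> client
# reference information) and second mappings (method name -> method
# reference information).

class MappingTable:
    def __init__(self):
        self.first = {}    # client name -> client reference information
        self.second = {}   # method name -> method reference information

    def register_client(self, client_name, client_ref):   # S903
        self.first[client_name] = client_ref

    def register_method(self, method_name, method_ref):
        self.second[method_name] = method_ref

table = MappingTable()
table.register_client("Client1", "Client_Ref1")   # first mapping
table.register_method("handle1", "handle_1")      # second mapping
assert table.first["Client1"] == "Client_Ref1"    # callback path
assert table.second["handle1"] == "handle_1"      # forward-call path
```

The two dictionaries correspond to the two directions of use named above: the second mapping serves the client's forward call, and the first mapping serves the server's callback.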
When the client needs to acquire media data, it sends a data acquisition request to the server using the Binder communication mechanism based on the shared memory. The data acquisition request includes at least the name of the client and the name of the data processing method to be invoked. In this embodiment, the name of the data processing method to be invoked may be referred to as the name of the target data processing method.
S702, the server receives a data acquisition request sent by the client.
The server can directly access the shared memory, so that the server can directly receive the data acquisition request sent by the client from the shared memory and further extract the name of the client and the name of the target data processing method from the data acquisition request.
S703, the server responds to the data acquisition request and processes the audio and video data corresponding to the client to obtain media data.
After receiving the data acquisition request, the server prepares the media data for the client. Firstly, the server collects audio and video data corresponding to the client through a data collection service, for example, a microphone module is called to collect audio data, and a camera module is called to collect video data.
The server may refer to the flow described in fig. 10, and process the collected audio and video data, specifically as follows:
S1001, determining target method reference information corresponding to the name of the target data processing method based on the mapping relationship table.
S1002, acquiring the corresponding target data processing method according to the target method reference information.
S1003, processing audio and video data corresponding to the client by using the target data processing method to obtain the media data.
From the mapping relationship table, the server can obtain the second mapping relationship, i.e., the mapping between the name of a data processing method and its method reference information. Based on the second mapping relationship, the server may determine the method reference information corresponding to the name of the target data processing method, that is, the target method reference information. The target method reference information guides to a data processing method provided by the server, i.e., the target data processing method. The server can therefore acquire the target data processing method according to the target method reference information, and use it to process the collected audio and video data so as to obtain media data meeting the data requirements of the client.
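Steps S1001 to S1003 can be sketched as a table-driven dispatch; the dictionaries and the byte-prefix "processing" are illustrative stand-ins for the real method table and PCM processing:

```python
# Sketch of S1001-S1003: resolve the target method reference from the
# second mapping, fetch the corresponding processing method, and apply it
# to the collected audio/video data. The lambda is a stand-in codec.

second_mapping = {"handle1": "handle_1"}              # name -> reference
methods = {"handle_1": lambda raw: b"pcm:" + raw}     # reference -> method

def process(request_method_name, raw_av_data):
    ref = second_mapping[request_method_name]         # S1001
    target_method = methods[ref]                      # S1002
    return target_method(raw_av_data)                 # S1003

media = process("handle1", b"frame")
assert media == b"pcm:frame"
```

The client only ever names the method (handle1); the server resolves the name through the table, so the client never needs a direct reference into the server's address space.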
S704, the server stores the media data in the anonymous memory and sends address information to the client based on the shared memory.
The server transmits the processed media data to the client. In order to improve the transmission efficiency of the media data, in this embodiment the media data is not transmitted through the shared memory; instead, an anonymous memory is configured separately for transmitting the media data. The anonymous memory is arranged at the kernel layer, so both the client and the server can directly access it. The anonymous memory is a dynamic memory, that is, its size is not strictly limited: the storage space of the anonymous memory can be adjusted according to the data size of the cache data actually written. Although the shared memory and the anonymous memory are both cache spaces that the client and the server can directly access, the shared memory imposes strict limits on its storage space, which severely restricts the data writing process. The storage space of the anonymous memory is clearly more flexible and convenient for writing data, so the writing efficiency of the data can be improved.
The server can realize asynchronous transmission of the media data based on the anonymous memory: while writing the media data into the anonymous memory, the server can send the address information to the client through the shared memory, so as to inform the client of the storage location of the media data and instruct the client to start acquiring it. The client can start acquiring the media data without waiting for the server to finish writing the media data into the anonymous memory, thereby saving overall transmission time.
The process of transmitting media data to the client by the server based on the anonymous memory may refer to the flowchart shown in fig. 11, which is specifically as follows:
S1101, storing the media data in the anonymous memory.
S1102, generating address information corresponding to a storage location of the media data in the anonymous memory.
S1103, sending the address information to the client based on the shared memory.
After generating the media data, the server first determines the cache space in the kernel layer that can be used for caching data, and further determines the free space currently available in that cache space. Based on the data size of the media data, the server allocates a target storage space of corresponding size from the free space to store the media data. The target storage space has corresponding address information, for example, the location of the target storage space in the overall cache space, or its location in the overall memory space of the display device 200, etc. The address information may be represented by a file descriptor (File Descriptor, FD). According to the address information of the target storage space, the target storage space can be found, and the cached media data can be obtained from it.
The server stores the media data in a target storage space of the anonymous memory, and sends address information corresponding to the target storage space to the client through the shared memory so as to quickly inform the client of the storage position of the media data in the anonymous memory.
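A minimal model of steps S1101 to S1103, in which a plain bytearray stands in for the anonymous memory and an (offset, length) pair stands in for the FD-based address information:

```python
# Sketch of S1101-S1103: the server writes the bulky media data into an
# anonymous-memory-like region and passes only a small address descriptor
# through the shared memory; the client then reads by address (S706).

anonymous_mem = bytearray(64)    # kernel-side region both sides can access

def server_store(media: bytes, offset: int):
    anonymous_mem[offset:offset + len(media)] = media       # S1101
    return {"offset": offset, "length": len(media)}         # S1102: address

def client_fetch(addr):
    start, end = addr["offset"], addr["offset"] + addr["length"]
    return bytes(anonymous_mem[start:end])

addr = server_store(b"media-frame-1", offset=0)   # write + address info
assert client_fetch(addr) == b"media-frame-1"     # client reads by address
assert addr["length"] == 13   # payload is 13 bytes; the descriptor sent
                              # through shared memory is just a few integers
```

The asymmetry is the point made in the text: only the tiny descriptor crosses the shared memory, while the large payload stays in the anonymous region that both processes can read directly.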
Because the server can communicate with a plurality of clients at the same time, the accuracy of data transmission between the server and each client needs to be ensured, i.e. address information is accurately transmitted to the corresponding client. The process of sending address information from the server to the client may refer to the flow shown in fig. 12, which is specifically as follows:
S1201, determining, based on the mapping relationship table, the client reference information corresponding to the name of the client.
S1202, based on the shared memory, sending the address information to the client according to the client reference information.
The process of the server sending address information to the client is actually a callback process. The data acquisition request sent by the client to the server carries the name of the client, and based on the first mapping relationship in the mapping relationship table, i.e., the mapping between the name of the client and the client reference information, the server can determine the client reference information corresponding to the name of the client, namely the target client reference information. As described above, during client registration the Binder driver stores the mapping relationship between the information of the client and the client reference information, so the information of the client corresponding to the target client reference information, that is, the target client information, can be determined based on that mapping relationship. According to the information of the client, the address information can be mapped directly to the corresponding client, so the transmission of the address information is completed quickly. Because the first mapping relationship corresponding to each client is different, the server can ensure, based on the first mapping relationships, that each piece of address information is sent to the correct client.
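The callback resolution of S1201 and S1202 can be sketched as two table lookups; the dictionaries, the inbox delivery, and the example address info are illustrative stand-ins for the Binder driver's bookkeeping and the shared-memory channel (MAC1 and Client_Ref1 follow the examples in the text):

```python
# Sketch of S1201-S1202: the server resolves the client name to client
# reference information via the first mapping, the Binder-driver-side
# table resolves the reference to the client's own information, and the
# address information is delivered to that client.

first_mapping = {"Client1": "Client_Ref1"}   # server-side mapping table
driver_table = {"Client_Ref1": "MAC1"}       # kept by the Binder driver

def send_address(client_name, address_info, inboxes):
    ref = first_mapping[client_name]         # S1201: name -> reference
    client_info = driver_table[ref]          # driver: reference -> client
    inboxes[client_info] = address_info      # S1202: deliver via shared mem

inboxes = {}
send_address("Client1", {"fd": 7, "offset": 0}, inboxes)  # fd value is made up
assert inboxes == {"MAC1": {"fd": 7, "offset": 0}}
```

Because each client's first mapping is distinct, the two lookups cannot deliver one client's address information to another, which is the correctness property claimed above.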
S705, the client receives the address information sent by the server.
The client may receive address information sent by the server, the address information indicating a location in anonymous memory where media data corresponding to the client is stored.
S706, the client acquires the media data from the anonymous memory according to the address information.
After receiving the address information, the client can directly access the storage space corresponding to the storage location indicated by the address information in the anonymous memory, and acquire the media data from that storage space. The media data is audio and video data processed by the server based on the data processing method specified by the client, and can therefore meet the data requirements of the client, ensuring the quality with which the client's functions are realized. Meanwhile, in the data transmission process between the client and the server, the number of inter-process communications between the client and the server can be effectively reduced based on the Binder communication mechanism, and the speed at which the client acquires the media data can be effectively increased by transmitting the media data, which has a larger data amount, in the anonymous memory while transmitting the address information, which has a smaller data amount, in the shared memory. Moreover, the flexible storage space of the anonymous memory can effectively improve the performance and efficiency with which the server writes the media data. Therefore, the inter-process communication method provided by this embodiment can effectively improve the data transmission efficiency between different processes, reduce the data transmission delay, and further improve the user's experience.
The media data acquired by the client takes the form of a data stream: the server continuously collects audio and video data and processes the audio and video data stream into a media data stream, which it continuously sends to the client. Thus, the client needs to use the acquired media data in a certain order. The process of the client processing media data may refer to the flow shown in fig. 13, specifically as follows:
S1301, storing the media data in a circular buffer, where the media data stored in the circular buffer are arranged according to receiving time.
S1302, extracting media data with earliest receiving time from the circular buffer, and releasing the space occupied by the media data with earliest receiving time in the circular buffer.
After receiving media data, the client uses it directly if no media data currently has a task being executed on it. If there is media data currently executing a task, the client first stores the received media data in a circular buffer (ring buffer), so as to manage each piece of media data waiting for use through a ring buffer queue. Typically, media data is ordered primarily by generation time: the earlier the generation time of the media data, the earlier the server transmits it to the client, so the order in which the client receives the media data coincides with the order in which it was generated. The media data stored in the circular buffer may be arranged according to receiving time, for example, from front to back, or from back to front.
In order to ensure the time sequence of the media data, the client extracts the media data with the earliest receiving time from the circular buffer and uses it. Illustratively, if the media data are arranged from front to back by receiving time, the media data with the earliest receiving time is at the head of the ordering; if the media data are arranged from back to front by receiving time, the media data with the earliest receiving time is at the end of the ordering. In this way, the time sequence of the media data used by the client can be ensured.
After the client extracts the media data with the earliest receiving time, the space occupied by that media data in the circular buffer is released. At that point, the extracted media data no longer exists in the circular buffer, and the next piece of media data in the queue becomes the one with the earliest receiving time and is extracted in turn. In this way, it can be effectively ensured that media data that has already been extracted will not be extracted again.
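The circular-buffer management of S1301 and S1302 can be sketched with a double-ended queue standing in for the ring buffer queue; the Data_buffer names follow the examples later in the text:

```python
# Sketch of S1301-S1302: received media data are queued in arrival order,
# and the client always consumes the earliest entry, whose slot is then
# released (popleft removes it from the queue).

from collections import deque

ring = deque()                       # stands in for the circular buffer

def receive(media):                  # S1301: store in arrival order
    ring.append(media)

def consume():                       # S1302: extract earliest, free its slot
    return ring.popleft()

for frame in ("Data_buffer_1", "Data_buffer_2", "Data_buffer_3"):
    receive(frame)

assert consume() == "Data_buffer_1"  # earliest receiving time first
assert consume() == "Data_buffer_2"  # next entry becomes the earliest
assert list(ring) == ["Data_buffer_3"]
```

FIFO consumption plus slot release gives exactly the three guarantees named below: in-order processing, no omission, and no repetition.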
Managing each piece of media data based on the ring buffer queue ensures that the client processes the media data in order, without omission or repetition, thereby ensuring the realization quality of the corresponding media functions.
Based on the above-mentioned inter-process communication method, the following specific embodiments are provided for explanation:
Referring to the inter-process communication diagram shown in fig. 14, the solid lines represent the flow of the procedure and the broken lines represent the flow of the data. Fig. 14 illustrates communication between a Camera client and a Camera server, where the Camera client runs in a first process and the Camera server runs in a second process and provides the Camera service, the first process and the second process being different processes.
After the display device 200 is started, the second process runs automatically, i.e., each service in the second process is started in turn. The Camera client sends a service request including a service name, for example Camera, to the second process to acquire the Camera service. The display device 200 responds to the service request by determining whether the Camera service has been started. If the Camera service has not been started, feedback information indicating a failed request is returned to the client. Based on the feedback information, the client waits for a certain time, for example 10 ms, and sends the service request again to request the Camera service. If the Camera service has been started, the subsequent service request actions continue. The Camera server registers the service information it provides in advance. For example, the Camera server sends a registration request to the Binder driver, the registration request carrying the information of the Camera service; the Binder driver generates the corresponding service name Camera and service reference information mCameraService based on the information of the Camera service, and the service reference information mCameraService can guide to the Camera service, i.e., allow the Camera client to accurately call it. The Binder driver establishes the mapping relationship between Camera and mCameraService and caches it in the shared memory, thereby completing the service registration process of the Camera server. The service request sent by the Camera client to the second process based on the shared memory carries the name of the requested service, namely Camera.
Based on the mapping relationship between Camera and mCameraService cached in the shared memory, the Binder driver can feed back the service reference information mCameraService to the Camera client, and according to mCameraService the client can accurately call the Camera service, i.e., the client starts communicating with the Camera server.
After the Camera client and the Camera server establish communication, the Camera client registers its client information with the Camera server and requests media data from the Camera server. There is no strict requirement on the order in which these two tasks start, as long as the Camera client finishes registering its client information with the Camera server before the Camera server sends media data to the Camera client.
The process of the Camera client registering its client information with the Camera server is described first; through it, the Camera client provides the Camera server with client reference information used to guide to the client. The Camera client sends a registration request to the Camera server based on the shared memory, the registration request including information of the Camera client, which can be mapped directly to the Camera client. The Binder driver generates a corresponding client name, e.g., Client1, based on the information of the Camera client, and generates corresponding client reference information, e.g., Client_Ref1. The Binder driver establishes the first mapping relationship between Client1 and Client_Ref1, establishes a mapping relationship between Client_Ref1 and the information of the Camera client, sends the first mapping relationship to the Camera server, and stores the mapping relationship between Client_Ref1 and the information of the Camera client in the shared memory. The Camera server receives the first mapping relationship and adds it to the mapping relationship table for management.
Further, if other clients communicate with the Camera server, they may register their client information by referring to the process used by the Camera client. As shown in fig. 14, after Client2 to ClientN register in the same way, the Camera server manages Client1 to ClientN according to their corresponding first mapping relationships.
The Camera server provides a plurality of data processing methods, for example data processing methods 1 to N, and registers the information of these N data processing methods in advance. For example, it sends a registration request to the Binder driver, the registration request carrying the information of a data processing method. After receiving the registration request, the Binder driver generates the name of the data processing method corresponding to that information, for example the name handle1 corresponding to data processing method 1, and the method reference information handle_1 corresponding to data processing method 1, where the method reference information handle_1 can guide to data processing method 1, i.e., call data processing method 1. The Binder driver establishes the second mapping relationship between handle1 and the method reference information handle_1, and returns the second mapping relationship to the Camera server. The Camera server adds the second mapping relationship to the mapping relationship table so as to manage data processing method 1. The other data processing methods are registered in the same way as data processing method 1, yielding the corresponding second mapping relationships, such as those corresponding to handle_2 to handle_N in fig. 14. The Camera server manages the N data processing methods based on the mapping relationship table.
Next, the process of the Camera client requesting media data from the Camera server is described. The Camera client sends a data acquisition request based on the shared memory to request media data. The data acquisition request includes the name of the Camera client, i.e., Client1, and the name of the data processing method to be invoked, e.g., handle1. After receiving the data acquisition request, the Camera server extracts the name of the data processing method, namely handle1, from it. Based on the second mapping relationship in the mapping relationship table, i.e., the second mapping relationship between handle1 and handle_1, the Camera server determines that the method reference information corresponding to handle1 is handle_1, and is then guided by the method reference information handle_1 to the corresponding data processing method, namely data processing method 1. Data processing method 1 is the data processing method the Camera client wants to call, for example the PCM method. The Camera server collects the audio data corresponding to the Camera client by calling the microphone module, collects the video data corresponding to the Camera client through the camera module, and processes the audio and video data using data processing method 1, i.e., the PCM method, to obtain media data, such as media data 1.
The Camera server sends media data 1 to the Camera client through a callback process. The Camera server extracts Client1 from the data acquisition request and, based on the first mapping relationship in the mapping relationship table, i.e., the first mapping relationship between Client1 and Client_Ref1, determines that the client reference information is Client_Ref1. The Camera server stores media data 1 in the anonymous memory and generates corresponding address information, such as address information a, based on the storage location of media data 1. According to the client reference information Client_Ref1, the Camera server is guided, based on the shared memory, to Client1, namely the Camera client, and sends address information a to the Camera client.
After receiving address information a, the Camera client acquires media data 1 from the anonymous memory according to it. If the Camera client currently has no media data being processed, it processes media data 1 directly. If the Camera client currently has media data being processed, it caches media data 1 in the circular buffer in order of receiving time. Illustratively, media data 1 is identified in the circular buffer as Data_buffer_1. The Camera client continues to receive media data transmitted by the Camera server, such as media data 2 to N, and caches them in the circular buffer in order of receiving time, as Data_buffer_2 to Data_buffer_N shown in fig. 14.
After the Camera client finishes processing the media data currently being processed, it extracts the media data with the earliest reception time from the circular buffer for processing. For example, the media data with the earliest reception time is data_buffer_1, namely media data 1, which is at the head of the queue. The Camera client extracts media data 1 from the circular buffer for processing, releases the storage space that media data 1 occupied in the circular buffer, and rearranges the remaining media data. After finishing media data 1, the Camera client continues to process the remaining media data in the same cyclic manner. In this way the media data are processed in order of reception, with no item omitted or processed twice.
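The client-side queuing discipline above is plain FIFO and can be sketched in a few lines. This is a simplified model, not the patent's implementation: an `ArrayDeque` stands in for the circular buffer, strings stand in for media data, and the `data_buffer_N` labels are only illustrative.

```java
import java.util.ArrayDeque;

// Sketch of the client's buffering rule: process immediately when idle,
// otherwise queue in arrival order and always take the earliest item next.
public class MediaQueue {
    private final ArrayDeque<String> buffer = new ArrayDeque<>();
    private String inFlight;   // the media data currently being processed

    // Called when address-resolved media data arrives from the server.
    public void onReceive(String media) {
        if (inFlight == null) {
            inFlight = media;          // nothing in progress: process directly
        } else {
            buffer.addLast(media);     // otherwise cache in order of reception
        }
    }

    // Finish the current item; the earliest buffered item becomes current and
    // the slot it occupied is released when it is polled from the deque.
    public String finishCurrent() {
        String done = inFlight;
        inFlight = buffer.pollFirst();
        return done;
    }

    public String current() { return inFlight; }
}
```

Because items are appended at the tail and removed only from the head, ordering, no-omission, and no-repetition follow directly from the deque's FIFO semantics.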
According to the above technical solution, when the client and the server, which belong respectively to the first process and the second process in the display device, transmit data, the server first stores the media data in the anonymous memory rather than transmitting the bulky media data to the client directly. Using the Binder communication mechanism, the server transmits only the address information to the client through the shared memory, indicating to the client where the media data is stored in the anonymous memory. The client can then acquire the media data directly from the anonymous memory according to the address information. This inter-process communication method avoids direct transmission of large media data between the client and the server, saving data transmission time and effectively reducing the transmission delay of the media data.
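The core gain of the scheme can be demonstrated with a memory-mapped region: the producer writes the media bytes once, only the small (offset, length) pair crosses the process boundary, and the consumer maps the same region and reads in place. In this sketch a temporary file stands in for Android's anonymous shared memory (ashmem, exposed in Java as MemoryFile); class and method names are invented for illustration.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: a shared mapping plays the role of the anonymous memory; only the
// address information (offset, length) would be sent over Binder.
public class SharedRegionDemo {
    public static byte[] roundTrip(byte[] media) throws IOException {
        Path backing = Files.createTempFile("ashmem-demo", ".bin");
        try (FileChannel producer = FileChannel.open(backing,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // "Server" side: store the media data once in the shared region.
            MappedByteBuffer w = producer.map(FileChannel.MapMode.READ_WRITE, 0, media.length);
            w.put(media);
            w.force();
            int offset = 0;
            int length = media.length;   // only this pair crosses the boundary

            // "Client" side: map the same region and read using the address info.
            try (FileChannel consumer = FileChannel.open(backing, StandardOpenOption.READ)) {
                MappedByteBuffer r = consumer.map(FileChannel.MapMode.READ_ONLY, offset, length);
                byte[] out = new byte[length];
                r.get(out);
                return out;
            }
        } finally {
            Files.deleteIfExists(backing);
        }
    }
}
```

Whatever the size of the media payload, the message exchanged between the processes stays a few bytes, which is what yields the reduced transmission delay claimed above.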
The foregoing detailed description of the embodiments merely illustrates the general principles of the present application and should not be taken as limiting its scope in any way. Any other embodiment developed by those skilled in the art in accordance with the present application without inventive effort falls within the scope of protection of the present application.

Claims (10)

1. A display device, characterized by comprising:
a display configured to display media data corresponding to a client, the media data provided by a server, wherein the client is running in a first process and the server is running in a second process;
a controller configured to:
receiving a data acquisition request sent by the client based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
responding to the data acquisition request, processing audio and video data corresponding to the client to obtain the media data;
and storing the media data in an anonymous memory, and sending address information to the client based on the shared memory so that the client can acquire the media data from the anonymous memory according to the address information.
2. The display device of claim 1, wherein prior to receiving the data retrieval request sent by the client based on the shared memory, the controller is further configured to:
receiving a registration request sent by the client based on the shared memory, wherein the registration request comprises client reference information, and the client reference information and the name of the client have a mapping relation and are used for guiding the client;
and responding to the registration request, adding the mapping relation between the name of the client and the client reference information to a mapping relation table, wherein the mapping relation table is composed of a first mapping relation between the name of each client and the client reference information and a second mapping relation between the name of each data processing method and the method reference information, and the method reference information is used for guiding to the corresponding data processing method.
3. The display device according to claim 2, wherein the data acquisition request includes a name of a target data processing method, and the controller, in processing the audio and video data corresponding to the client to obtain the media data in response to the data acquisition request, is configured to:
determining target method reference information corresponding to the name of the target data processing method based on the mapping relation table;
acquiring a corresponding target data processing method according to the target method reference information;
and processing the audio and video data corresponding to the client by using the target data processing method to obtain the media data.
4. The display device of claim 1, wherein the controller, in storing the media data in the anonymous memory and sending the address information to the client based on the shared memory, is configured to:
storing the media data in an anonymous memory;
generating address information corresponding to a storage position of the media data in the anonymous memory;
and sending the address information to the client based on the shared memory.
5. The display device according to claim 2 or 3, wherein the data acquisition request comprises a name of the client, and the controller, in sending the address information to the client based on the shared memory, is configured to:
determining the client side reference information corresponding to the name of the client side based on the mapping relation table;
and based on the shared memory, sending the address information to the client according to the client reference information.
6. A display device, characterized by comprising:
a display configured to display media data corresponding to a client, the media data provided by a server, wherein the client is running in a first process and the server is running in a second process;
a controller configured to:
sending a data acquisition request to the server based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
receiving address information sent by the server based on the shared memory;
and acquiring the media data from the anonymous memory according to the address information.
7. The display device of claim 6, wherein prior to sending the data acquisition request to the server based on the shared memory, the controller is further configured to:
and sending a registration request to the server based on the shared memory, wherein the registration request comprises client reference information, and the client reference information and the name of the client have a mapping relation and are used for guiding to the client.
8. The display device of claim 6 or 7, wherein after retrieving the media data from anonymous memory, the controller is further configured to:
storing the media data into a circular buffer area, wherein the media data stored in the circular buffer area are arranged according to the receiving time;
and extracting the media data with earliest receiving time from the circulating buffer area, and releasing the space occupied by the media data with earliest receiving time in the circulating buffer area.
9. An inter-process communication method, applied to a server in a display device, wherein the display device is configured to display media data corresponding to a client, the media data is provided by the server, the client runs in a first process and the server runs in a second process, and the method comprises:
receiving a data acquisition request sent by the client based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
responding to the data acquisition request, processing audio and video data corresponding to the client to obtain the media data;
and storing the media data in an anonymous memory, and sending address information to the client based on the shared memory so that the client can acquire the media data from the anonymous memory according to the address information.
10. An inter-process communication method, applied to a client in a display device, wherein the display device is configured to display media data corresponding to the client, the media data is provided by a server, the client runs in a first process and the server runs in a second process, and the method comprises:
sending a data acquisition request to the server based on a shared memory, wherein a Binder communication mechanism is adopted for data transmission based on the shared memory;
receiving address information sent by the server based on the shared memory;
and acquiring the media data from the anonymous memory according to the address information.
CN202210093739.6A 2022-01-26 2022-01-26 Display equipment and inter-process communication method Pending CN116546235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210093739.6A CN116546235A (en) 2022-01-26 2022-01-26 Display equipment and inter-process communication method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210093739.6A CN116546235A (en) 2022-01-26 2022-01-26 Display equipment and inter-process communication method

Publications (1)

Publication Number Publication Date
CN116546235A true CN116546235A (en) 2023-08-04

Family

ID=87454716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210093739.6A Pending CN116546235A (en) 2022-01-26 2022-01-26 Display equipment and inter-process communication method

Country Status (1)

Country Link
CN (1) CN116546235A (en)

Similar Documents

Publication Publication Date Title
CN106716954B (en) Real-time sharing method, system and computer readable memory during telephone call
CN105659619B (en) Image processing apparatus and its control method
CN112558825A (en) Information processing method and electronic equipment
US20110066971A1 (en) Method and apparatus for providing application interface portions on peripheral computing devices
US10375342B2 (en) Browsing remote content using a native user interface
EP2912562B1 (en) Updating services during real-time communication and sharing-experience sessions
WO2022257699A1 (en) Image picture display method and apparatus, device, storage medium and program product
JP2023503679A (en) MULTI-WINDOW DISPLAY METHOD, ELECTRONIC DEVICE AND SYSTEM
CN112527174B (en) Information processing method and electronic equipment
AU2013345759A1 (en) Transmission system and program
CN112527222A (en) Information processing method and electronic equipment
CN113507646A (en) Display device and multi-tab-page media asset playing method of browser
CN112269516A (en) Display method and access method of desktop element data and related devices
CN110968362A (en) Application running method and device and storage medium
WO2021052488A1 (en) Information processing method and electronic device
CN112507329A (en) Safety protection method and device
CN114530148A (en) Control method and device and electronic equipment
CN116546235A (en) Display equipment and inter-process communication method
WO2023035619A1 (en) Scene rendering method and apparatus, device and system
CN115364477A (en) Cloud game control method and device, electronic equipment and storage medium
CN114007129A (en) Display device and network distribution method
CN114302101A (en) Display apparatus and data sharing method
CN113419650A (en) Data moving method and device, storage medium and electronic equipment
CN111782606A (en) Display device, server, and file management method
JP2019091448A (en) Method for expressing facilities, device, facilities, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination